00:00:00.001 Started by upstream project "autotest-per-patch" build number 130537 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.097 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.099 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.164 Fetching changes from the remote Git repository 00:00:00.166 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.236 Using shallow fetch with depth 1 00:00:00.236 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.236 > git --version # timeout=10 00:00:00.296 > git --version # 'git version 2.39.2' 00:00:00.296 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.338 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.338 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.878 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.890 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.904 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:05.904 > git config core.sparsecheckout # timeout=10 00:00:05.915 > git read-tree -mu HEAD # timeout=10 00:00:05.932 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:05.955 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:05.956 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:06.060 [Pipeline] Start of Pipeline 00:00:06.077 [Pipeline] library 00:00:06.079 Loading library shm_lib@master 00:00:06.079 Library shm_lib@master is cached. Copying from home. 00:00:06.096 [Pipeline] node 00:00:06.106 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.107 [Pipeline] { 00:00:06.118 [Pipeline] catchError 00:00:06.120 [Pipeline] { 00:00:06.134 [Pipeline] wrap 00:00:06.145 [Pipeline] { 00:00:06.156 [Pipeline] stage 00:00:06.158 [Pipeline] { (Prologue) 00:00:06.356 [Pipeline] sh 00:00:06.645 + logger -p user.info -t JENKINS-CI 00:00:06.663 [Pipeline] echo 00:00:06.664 Node: CYP9 00:00:06.671 [Pipeline] sh 00:00:06.973 [Pipeline] setCustomBuildProperty 00:00:06.982 [Pipeline] echo 00:00:06.983 Cleanup processes 00:00:06.986 [Pipeline] sh 00:00:07.269 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.269 3428940 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.283 [Pipeline] sh 00:00:07.568 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.568 ++ grep -v 'sudo pgrep' 00:00:07.568 ++ awk '{print $1}' 00:00:07.568 + sudo kill -9 00:00:07.568 + true 00:00:07.581 [Pipeline] cleanWs 00:00:07.589 [WS-CLEANUP] Deleting project workspace... 00:00:07.589 [WS-CLEANUP] Deferred wipeout is used... 
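Before anything is fetched, the prologue above makes sure no SPDK processes from a previous run are still holding the workspace. A minimal sketch of that cleanup idiom in shell, assuming Jenkins' $WORKSPACE variable points at the job directory (the traced commands hard-code the full path instead):

    # List processes whose full command line mentions the workspace, drop the
    # pgrep invocation itself from the matches, and keep only the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 exits non-zero when the PID list is empty, hence the '|| true'
    # (visible above as '+ sudo kill -9' followed by '+ true').
    sudo kill -9 $pids || true   # $pids left unquoted on purpose: one argument per PID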
00:00:07.596 [WS-CLEANUP] done 00:00:07.600 [Pipeline] setCustomBuildProperty 00:00:07.612 [Pipeline] sh 00:00:07.932 + sudo git config --global --replace-all safe.directory '*' 00:00:08.024 [Pipeline] httpRequest 00:00:08.454 [Pipeline] echo 00:00:08.455 Sorcerer 10.211.164.101 is alive 00:00:08.464 [Pipeline] retry 00:00:08.466 [Pipeline] { 00:00:08.475 [Pipeline] httpRequest 00:00:08.479 HttpMethod: GET 00:00:08.479 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.479 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.504 Response Code: HTTP/1.1 200 OK 00:00:08.505 Success: Status code 200 is in the accepted range: 200,404 00:00:08.505 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:16.031 [Pipeline] } 00:00:16.049 [Pipeline] // retry 00:00:16.058 [Pipeline] sh 00:00:16.345 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:16.361 [Pipeline] httpRequest 00:00:17.369 [Pipeline] echo 00:00:17.370 Sorcerer 10.211.164.101 is alive 00:00:17.376 [Pipeline] retry 00:00:17.377 [Pipeline] { 00:00:17.386 [Pipeline] httpRequest 00:00:17.390 HttpMethod: GET 00:00:17.390 URL: http://10.211.164.101/packages/spdk_718f46c19269161caad2fb1c28f8e852027f99c4.tar.gz 00:00:17.391 Sending request to url: http://10.211.164.101/packages/spdk_718f46c19269161caad2fb1c28f8e852027f99c4.tar.gz 00:00:17.401 Response Code: HTTP/1.1 200 OK 00:00:17.402 Success: Status code 200 is in the accepted range: 200,404 00:00:17.402 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_718f46c19269161caad2fb1c28f8e852027f99c4.tar.gz 00:03:39.408 [Pipeline] } 00:03:39.427 [Pipeline] // retry 00:03:39.436 [Pipeline] sh 00:03:39.767 + tar --no-same-owner -xf spdk_718f46c19269161caad2fb1c28f8e852027f99c4.tar.gz 00:03:42.326 [Pipeline] sh 00:03:42.611 + git -C spdk log --oneline -n5 00:03:42.611 718f46c19 lib/trace: add extra check in trace parser when determining first entry 00:03:42.611 6e5fa34c7 lib/event: setup tracing before creating app_thread 00:03:42.611 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:03:42.611 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:03:42.611 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:03:42.624 [Pipeline] } 00:03:42.642 [Pipeline] // stage 00:03:42.654 [Pipeline] stage 00:03:42.659 [Pipeline] { (Prepare) 00:03:42.678 [Pipeline] writeFile 00:03:42.696 [Pipeline] sh 00:03:42.987 + logger -p user.info -t JENKINS-CI 00:03:43.001 [Pipeline] sh 00:03:43.288 + logger -p user.info -t JENKINS-CI 00:03:43.301 [Pipeline] sh 00:03:43.587 + cat autorun-spdk.conf 00:03:43.587 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:43.587 SPDK_TEST_NVMF=1 00:03:43.587 SPDK_TEST_NVME_CLI=1 00:03:43.587 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:43.587 SPDK_TEST_NVMF_NICS=e810 00:03:43.587 SPDK_TEST_VFIOUSER=1 00:03:43.587 SPDK_RUN_UBSAN=1 00:03:43.587 NET_TYPE=phy 00:03:43.596 RUN_NIGHTLY=0 00:03:43.601 [Pipeline] readFile 00:03:43.626 [Pipeline] withEnv 00:03:43.628 [Pipeline] { 00:03:43.641 [Pipeline] sh 00:03:43.928 + set -ex 00:03:43.928 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:43.928 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:43.928 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:43.928 ++ SPDK_TEST_NVMF=1 00:03:43.928 ++ SPDK_TEST_NVME_CLI=1 00:03:43.928 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:43.928 ++ SPDK_TEST_NVMF_NICS=e810
00:03:43.928 ++ SPDK_TEST_VFIOUSER=1
00:03:43.928 ++ SPDK_RUN_UBSAN=1
00:03:43.928 ++ NET_TYPE=phy
00:03:43.928 ++ RUN_NIGHTLY=0
00:03:43.928 + case $SPDK_TEST_NVMF_NICS in
00:03:43.928 + DRIVERS=ice
00:03:43.928 + [[ tcp == \r\d\m\a ]]
00:03:43.928 + [[ -n ice ]]
00:03:43.928 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:43.928 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:43.928 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:43.928 rmmod: ERROR: Module irdma is not currently loaded
00:03:43.928 rmmod: ERROR: Module i40iw is not currently loaded
00:03:43.928 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:43.928 + true
00:03:43.928 + for D in $DRIVERS
00:03:43.928 + sudo modprobe ice
00:03:43.928 + exit 0
00:03:43.938 [Pipeline] }
00:03:43.953 [Pipeline] // withEnv
00:03:43.958 [Pipeline] }
00:03:43.971 [Pipeline] // stage
00:03:43.981 [Pipeline] catchError
00:03:43.982 [Pipeline] {
00:03:43.996 [Pipeline] timeout
00:03:43.996 Timeout set to expire in 1 hr 0 min
00:03:43.998 [Pipeline] {
00:03:44.012 [Pipeline] stage
00:03:44.014 [Pipeline] { (Tests)
00:03:44.031 [Pipeline] sh
00:03:44.317 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:44.317 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:44.317 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:44.317 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:44.317 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:44.317 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:44.317 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:44.317 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:44.317 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:44.317 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:44.317 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:44.317 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:44.317 + source /etc/os-release
00:03:44.317 ++ NAME='Fedora Linux'
00:03:44.317 ++ VERSION='39 (Cloud Edition)'
00:03:44.317 ++ ID=fedora
00:03:44.317 ++ VERSION_ID=39
00:03:44.317 ++ VERSION_CODENAME=
00:03:44.317 ++ PLATFORM_ID=platform:f39
00:03:44.317 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:44.317 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:44.317 ++ LOGO=fedora-logo-icon
00:03:44.317 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:44.317 ++ HOME_URL=https://fedoraproject.org/
00:03:44.317 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:44.317 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:44.317 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:44.317 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:44.317 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:44.317 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:44.317 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:44.317 ++ SUPPORT_END=2024-11-12
00:03:44.317 ++ VARIANT='Cloud Edition'
00:03:44.317 ++ VARIANT_ID=cloud
00:03:44.317 + uname -a
00:03:44.317 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:44.317 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:47.618 Hugepages
00:03:47.618 node hugesize free / total
00:03:47.618 node0 1048576kB 0 / 0
00:03:47.618 node0 2048kB 0 / 0
00:03:47.618 node1 1048576kB 0 / 0
00:03:47.618 node1 2048kB 0 / 0
00:03:47.618
00:03:47.618 Type  BDF           Vendor Device NUMA Driver  Device Block devices
00:03:47.618 I/OAT 0000:00:01.0  8086   0b00   0    ioatdma -      -
00:03:47.618 I/OAT 0000:00:01.1  8086   0b00   0    ioatdma -      -
00:03:47.618 I/OAT 0000:00:01.2  8086   0b00   0    ioatdma -      -
00:03:47.618 I/OAT 0000:00:01.3  8086   0b00   0    ioatdma -      -
00:03:47.618 I/OAT 0000:00:01.4  8086   0b00   0    ioatdma -      -
00:03:47.618 I/OAT 0000:00:01.5  8086   0b00   0    ioatdma -      -
00:03:47.618 I/OAT 0000:00:01.6  8086   0b00   0    ioatdma -      -
00:03:47.618 I/OAT 0000:00:01.7  8086   0b00   0    ioatdma -      -
00:03:47.618 NVMe  0000:65:00.0  144d   a80a   0    nvme    nvme0  nvme0n1
00:03:47.618 I/OAT 0000:80:01.0  8086   0b00   1    ioatdma -      -
00:03:47.618 I/OAT 0000:80:01.1  8086   0b00   1    ioatdma -      -
00:03:47.618 I/OAT 0000:80:01.2  8086   0b00   1    ioatdma -      -
00:03:47.618 I/OAT 0000:80:01.3  8086   0b00   1    ioatdma -      -
00:03:47.618 I/OAT 0000:80:01.4  8086   0b00   1    ioatdma -      -
00:03:47.618 I/OAT 0000:80:01.5  8086   0b00   1    ioatdma -      -
00:03:47.618 I/OAT 0000:80:01.6  8086   0b00   1    ioatdma -      -
00:03:47.618 I/OAT 0000:80:01.7  8086   0b00   1    ioatdma -      -
00:03:47.618 + rm -f /tmp/spdk-ld-path
00:03:47.618 + source autorun-spdk.conf
00:03:47.618 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:47.618 ++ SPDK_TEST_NVMF=1
00:03:47.618 ++ SPDK_TEST_NVME_CLI=1
00:03:47.618 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:47.618 ++ SPDK_TEST_NVMF_NICS=e810
00:03:47.618 ++ SPDK_TEST_VFIOUSER=1
00:03:47.618 ++ SPDK_RUN_UBSAN=1
00:03:47.618 ++ NET_TYPE=phy
00:03:47.618 ++ RUN_NIGHTLY=0
00:03:47.618 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:47.618 + [[ -n '' ]]
00:03:47.618 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:47.618 + for M in /var/spdk/build-*-manifest.txt
00:03:47.618 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:47.618 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:47.618 + for M in /var/spdk/build-*-manifest.txt
00:03:47.618 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:47.618 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:47.618 + for M in /var/spdk/build-*-manifest.txt
00:03:47.618 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:47.618 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:47.618 ++ uname
00:03:47.618 + [[ Linux == \L\i\n\u\x ]]
00:03:47.618 + sudo dmesg -T
00:03:47.618 + sudo dmesg --clear
00:03:47.618 + dmesg_pid=3431073
00:03:47.618 + [[ Fedora Linux == FreeBSD ]]
00:03:47.618 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:47.618 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:47.618 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:47.618 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:47.618 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:47.618 + [[ -x /usr/src/fio-static/fio ]]
00:03:47.618 + export FIO_BIN=/usr/src/fio-static/fio
00:03:47.618 + FIO_BIN=/usr/src/fio-static/fio
00:03:47.618 + sudo dmesg -Tw
00:03:47.618 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:47.618 + [[ !
-v VFIO_QEMU_BIN ]] 00:03:47.618 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:47.618 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:47.618 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:47.618 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:47.618 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:47.618 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:47.618 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:47.618 Test configuration: 00:03:47.618 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:47.618 SPDK_TEST_NVMF=1 00:03:47.618 SPDK_TEST_NVME_CLI=1 00:03:47.618 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:47.618 SPDK_TEST_NVMF_NICS=e810 00:03:47.618 SPDK_TEST_VFIOUSER=1 00:03:47.618 SPDK_RUN_UBSAN=1 00:03:47.618 NET_TYPE=phy 00:03:47.618 RUN_NIGHTLY=0 08:18:39 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:47.618 08:18:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:47.618 08:18:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:47.618 08:18:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:47.618 08:18:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:47.618 08:18:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:47.618 08:18:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.618 08:18:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.618 08:18:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.618 08:18:39 -- paths/export.sh@5 -- $ export PATH 00:03:47.618 08:18:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.618 08:18:39 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:47.618 08:18:39 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:47.618 08:18:39 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727763519.XXXXXX 00:03:47.618 08:18:39 -- common/autobuild_common.sh@479 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1727763519.mWkmyQ 00:03:47.618 08:18:39 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:47.618 08:18:39 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:03:47.618 08:18:39 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:47.618 08:18:39 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:47.618 08:18:39 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:47.618 08:18:39 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:47.618 08:18:39 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:47.618 08:18:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:47.618 08:18:39 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:47.618 08:18:39 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:47.618 08:18:39 -- pm/common@17 -- $ local monitor 00:03:47.618 08:18:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.618 08:18:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.618 08:18:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.619 08:18:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.619 08:18:39 -- pm/common@25 -- $ sleep 1 00:03:47.619 08:18:39 -- pm/common@21 -- $ date +%s 00:03:47.619 08:18:39 -- pm/common@21 -- $ date +%s 00:03:47.619 08:18:39 -- pm/common@21 -- $ date +%s 00:03:47.619 08:18:39 -- pm/common@21 -- $ date +%s 00:03:47.619 08:18:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727763519 00:03:47.619 08:18:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727763519 00:03:47.619 08:18:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727763519 00:03:47.619 08:18:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727763519 00:03:47.619 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727763519_collect-vmstat.pm.log 00:03:47.619 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727763519_collect-cpu-load.pm.log 00:03:47.619 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727763519_collect-cpu-temp.pm.log 00:03:47.619 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727763519_collect-bmc-pm.bmc.pm.log 00:03:48.565 08:18:40 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:48.565 08:18:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:48.565 08:18:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:48.565 08:18:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:48.565 08:18:40 -- spdk/autobuild.sh@16 -- $ date -u 00:03:48.565 Tue Oct 1 06:18:40 AM UTC 2024 00:03:48.565 08:18:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:48.565 v25.01-pre-19-g718f46c19 00:03:48.565 08:18:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:48.565 08:18:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:48.565 08:18:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:48.565 08:18:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:48.565 08:18:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:48.565 08:18:40 -- common/autotest_common.sh@10 -- $ set +x 00:03:48.565 ************************************ 00:03:48.565 START TEST ubsan 00:03:48.565 ************************************ 00:03:48.565 08:18:40 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:48.565 using ubsan 00:03:48.565 00:03:48.565 real 0m0.000s 00:03:48.565 user 0m0.000s 00:03:48.565 sys 0m0.000s 00:03:48.565 08:18:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:48.565 08:18:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:48.565 ************************************ 00:03:48.565 END TEST ubsan 00:03:48.565 ************************************ 00:03:48.565 08:18:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:48.565 08:18:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:48.565 08:18:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:48.565 08:18:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:48.565 08:18:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:48.565 08:18:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:48.565 08:18:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:48.565 08:18:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:48.565 08:18:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:48.825 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:48.825 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:49.086 Using 'verbs' RDMA provider 00:04:04.952 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:17.179 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:17.179 Creating mk/config.mk...done. 00:04:17.179 Creating mk/cc.flags.mk...done. 00:04:17.179 Type 'make' to build. 
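Configuration has finished, and the parallel build starts next. A condensed sketch of the configure-and-build sequence this log traces, assuming a local SPDK checkout in $SPDK_DIR; the option list mirrors the config_params recorded above, with --with-shared appended by autobuild.sh:

    cd "$SPDK_DIR"
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"   # the CI run below pins -j144 to match its host's thread count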
00:04:17.179 08:19:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:04:17.179 08:19:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:17.179 08:19:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:17.179 08:19:08 -- common/autotest_common.sh@10 -- $ set +x 00:04:17.179 ************************************ 00:04:17.179 START TEST make 00:04:17.179 ************************************ 00:04:17.179 08:19:08 make -- common/autotest_common.sh@1125 -- $ make -j144 00:04:17.750 make[1]: Nothing to be done for 'all'. 00:04:19.132 The Meson build system 00:04:19.132 Version: 1.5.0 00:04:19.132 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:19.132 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:19.132 Build type: native build 00:04:19.132 Project name: libvfio-user 00:04:19.132 Project version: 0.0.1 00:04:19.132 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:19.132 C linker for the host machine: cc ld.bfd 2.40-14 00:04:19.132 Host machine cpu family: x86_64 00:04:19.132 Host machine cpu: x86_64 00:04:19.132 Run-time dependency threads found: YES 00:04:19.132 Library dl found: YES 00:04:19.132 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:19.132 Run-time dependency json-c found: YES 0.17 00:04:19.132 Run-time dependency cmocka found: YES 1.1.7 00:04:19.132 Program pytest-3 found: NO 00:04:19.132 Program flake8 found: NO 00:04:19.132 Program misspell-fixer found: NO 00:04:19.132 Program restructuredtext-lint found: NO 00:04:19.132 Program valgrind found: YES (/usr/bin/valgrind) 00:04:19.132 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:19.132 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:19.132 Compiler for C supports arguments -Wwrite-strings: YES 00:04:19.132 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:19.132 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:19.132 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:19.132 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:19.132 Build targets in project: 8 00:04:19.132 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:19.132 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:19.132 00:04:19.132 libvfio-user 0.0.1 00:04:19.132 00:04:19.132 User defined options 00:04:19.132 buildtype : debug 00:04:19.132 default_library: shared 00:04:19.132 libdir : /usr/local/lib 00:04:19.132 00:04:19.132 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:19.391 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:19.391 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:19.391 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:19.391 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:19.391 [4/37] Compiling C object samples/null.p/null.c.o 00:04:19.391 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:19.391 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:19.391 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:19.391 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:19.391 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:19.391 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:19.391 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:19.391 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:19.391 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:19.391 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:19.391 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:19.391 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:19.391 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:19.391 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:19.391 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:19.391 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:19.391 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:19.391 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:19.391 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:19.391 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:19.391 [25/37] Compiling C object samples/client.p/client.c.o 00:04:19.391 [26/37] Compiling C object samples/server.p/server.c.o 00:04:19.391 [27/37] Linking target samples/client 00:04:19.653 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:19.653 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:19.653 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:19.653 [31/37] Linking target test/unit_tests 00:04:19.653 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:19.653 [33/37] Linking target samples/lspci 00:04:19.653 [34/37] Linking target samples/null 00:04:19.653 [35/37] Linking target samples/server 00:04:19.653 [36/37] Linking target samples/gpio-pci-idio-16 00:04:19.653 [37/37] Linking target samples/shadow_ioeventfd_server 00:04:19.653 INFO: autodetecting backend as ninja 00:04:19.653 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
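All 37 libvfio-user targets have linked at this point. The next lines stage the result inside the SPDK tree rather than installing to the system prefix (libdir was configured as /usr/local/lib). A minimal sketch of that pattern, with the long build path shortened to a $BUILD_DIR variable for readability:

    BUILD_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    ninja -C "$BUILD_DIR"
    # DESTDIR prepends a staging root, so the libraries land under
    # .../spdk/build/libvfio-user/usr/local/lib instead of /usr/local/lib.
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C "$BUILD_DIR"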
00:04:19.915 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:20.176 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:20.176 ninja: no work to do. 00:04:26.769 The Meson build system 00:04:26.769 Version: 1.5.0 00:04:26.769 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:26.769 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:26.769 Build type: native build 00:04:26.769 Program cat found: YES (/usr/bin/cat) 00:04:26.769 Project name: DPDK 00:04:26.769 Project version: 24.03.0 00:04:26.769 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:26.769 C linker for the host machine: cc ld.bfd 2.40-14 00:04:26.769 Host machine cpu family: x86_64 00:04:26.769 Host machine cpu: x86_64 00:04:26.769 Message: ## Building in Developer Mode ## 00:04:26.769 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:26.769 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:26.769 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:26.769 Program python3 found: YES (/usr/bin/python3) 00:04:26.769 Program cat found: YES (/usr/bin/cat) 00:04:26.769 Compiler for C supports arguments -march=native: YES 00:04:26.769 Checking for size of "void *" : 8 00:04:26.769 Checking for size of "void *" : 8 (cached) 00:04:26.769 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:26.769 Library m found: YES 00:04:26.769 Library numa found: YES 00:04:26.769 Has header "numaif.h" : YES 00:04:26.769 Library fdt found: NO 00:04:26.769 Library execinfo found: NO 00:04:26.769 Has header "execinfo.h" : YES 00:04:26.769 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:26.769 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:26.769 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:26.769 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:26.769 Run-time dependency openssl found: YES 3.1.1 00:04:26.769 Run-time dependency libpcap found: YES 1.10.4 00:04:26.769 Has header "pcap.h" with dependency libpcap: YES 00:04:26.769 Compiler for C supports arguments -Wcast-qual: YES 00:04:26.769 Compiler for C supports arguments -Wdeprecated: YES 00:04:26.769 Compiler for C supports arguments -Wformat: YES 00:04:26.769 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:26.769 Compiler for C supports arguments -Wformat-security: NO 00:04:26.770 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:26.770 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:26.770 Compiler for C supports arguments -Wnested-externs: YES 00:04:26.770 Compiler for C supports arguments -Wold-style-definition: YES 00:04:26.770 Compiler for C supports arguments -Wpointer-arith: YES 00:04:26.770 Compiler for C supports arguments -Wsign-compare: YES 00:04:26.770 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:26.770 Compiler for C supports arguments -Wundef: YES 00:04:26.770 Compiler for C supports arguments -Wwrite-strings: YES 00:04:26.770 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:26.770 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:04:26.770 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:26.770 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:26.770 Program objdump found: YES (/usr/bin/objdump) 00:04:26.770 Compiler for C supports arguments -mavx512f: YES 00:04:26.770 Checking if "AVX512 checking" compiles: YES 00:04:26.770 Fetching value of define "__SSE4_2__" : 1 00:04:26.770 Fetching value of define "__AES__" : 1 00:04:26.770 Fetching value of define "__AVX__" : 1 00:04:26.770 Fetching value of define "__AVX2__" : 1 00:04:26.770 Fetching value of define "__AVX512BW__" : 1 00:04:26.770 Fetching value of define "__AVX512CD__" : 1 00:04:26.770 Fetching value of define "__AVX512DQ__" : 1 00:04:26.770 Fetching value of define "__AVX512F__" : 1 00:04:26.770 Fetching value of define "__AVX512VL__" : 1 00:04:26.770 Fetching value of define "__PCLMUL__" : 1 00:04:26.770 Fetching value of define "__RDRND__" : 1 00:04:26.770 Fetching value of define "__RDSEED__" : 1 00:04:26.770 Fetching value of define "__VPCLMULQDQ__" : 1 00:04:26.770 Fetching value of define "__znver1__" : (undefined) 00:04:26.770 Fetching value of define "__znver2__" : (undefined) 00:04:26.770 Fetching value of define "__znver3__" : (undefined) 00:04:26.770 Fetching value of define "__znver4__" : (undefined) 00:04:26.770 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:26.770 Message: lib/log: Defining dependency "log" 00:04:26.770 Message: lib/kvargs: Defining dependency "kvargs" 00:04:26.770 Message: lib/telemetry: Defining dependency "telemetry" 00:04:26.770 Checking for function "getentropy" : NO 00:04:26.770 Message: lib/eal: Defining dependency "eal" 00:04:26.770 Message: lib/ring: Defining dependency "ring" 00:04:26.770 Message: lib/rcu: Defining dependency "rcu" 00:04:26.770 Message: lib/mempool: Defining dependency "mempool" 00:04:26.770 Message: lib/mbuf: Defining dependency "mbuf" 00:04:26.770 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:26.770 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:26.770 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:26.770 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:26.770 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:26.770 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:04:26.770 Compiler for C supports arguments -mpclmul: YES 00:04:26.770 Compiler for C supports arguments -maes: YES 00:04:26.770 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:26.770 Compiler for C supports arguments -mavx512bw: YES 00:04:26.770 Compiler for C supports arguments -mavx512dq: YES 00:04:26.770 Compiler for C supports arguments -mavx512vl: YES 00:04:26.770 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:26.770 Compiler for C supports arguments -mavx2: YES 00:04:26.770 Compiler for C supports arguments -mavx: YES 00:04:26.770 Message: lib/net: Defining dependency "net" 00:04:26.770 Message: lib/meter: Defining dependency "meter" 00:04:26.770 Message: lib/ethdev: Defining dependency "ethdev" 00:04:26.770 Message: lib/pci: Defining dependency "pci" 00:04:26.770 Message: lib/cmdline: Defining dependency "cmdline" 00:04:26.770 Message: lib/hash: Defining dependency "hash" 00:04:26.770 Message: lib/timer: Defining dependency "timer" 00:04:26.770 Message: lib/compressdev: Defining dependency "compressdev" 00:04:26.770 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:26.770 Message: lib/dmadev: Defining dependency "dmadev" 
00:04:26.770 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:26.770 Message: lib/power: Defining dependency "power" 00:04:26.770 Message: lib/reorder: Defining dependency "reorder" 00:04:26.770 Message: lib/security: Defining dependency "security" 00:04:26.770 Has header "linux/userfaultfd.h" : YES 00:04:26.770 Has header "linux/vduse.h" : YES 00:04:26.770 Message: lib/vhost: Defining dependency "vhost" 00:04:26.770 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:26.770 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:26.770 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:26.770 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:26.770 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:26.770 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:26.770 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:26.770 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:26.770 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:26.770 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:26.770 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:26.770 Configuring doxy-api-html.conf using configuration 00:04:26.770 Configuring doxy-api-man.conf using configuration 00:04:26.770 Program mandb found: YES (/usr/bin/mandb) 00:04:26.770 Program sphinx-build found: NO 00:04:26.770 Configuring rte_build_config.h using configuration 00:04:26.770 Message: 00:04:26.770 ================= 00:04:26.770 Applications Enabled 00:04:26.770 ================= 00:04:26.770 00:04:26.770 apps: 00:04:26.770 00:04:26.770 00:04:26.770 Message: 00:04:26.770 ================= 00:04:26.770 Libraries Enabled 00:04:26.770 ================= 00:04:26.770 00:04:26.770 libs: 00:04:26.770 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:26.770 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:26.770 cryptodev, dmadev, power, reorder, security, vhost, 00:04:26.770 00:04:26.770 Message: 00:04:26.770 =============== 00:04:26.770 Drivers Enabled 00:04:26.770 =============== 00:04:26.770 00:04:26.770 common: 00:04:26.770 00:04:26.770 bus: 00:04:26.770 pci, vdev, 00:04:26.770 mempool: 00:04:26.770 ring, 00:04:26.770 dma: 00:04:26.770 00:04:26.770 net: 00:04:26.770 00:04:26.770 crypto: 00:04:26.770 00:04:26.770 compress: 00:04:26.770 00:04:26.770 vdpa: 00:04:26.770 00:04:26.770 00:04:26.770 Message: 00:04:26.770 ================= 00:04:26.770 Content Skipped 00:04:26.770 ================= 00:04:26.770 00:04:26.770 apps: 00:04:26.770 dumpcap: explicitly disabled via build config 00:04:26.770 graph: explicitly disabled via build config 00:04:26.770 pdump: explicitly disabled via build config 00:04:26.770 proc-info: explicitly disabled via build config 00:04:26.770 test-acl: explicitly disabled via build config 00:04:26.770 test-bbdev: explicitly disabled via build config 00:04:26.770 test-cmdline: explicitly disabled via build config 00:04:26.770 test-compress-perf: explicitly disabled via build config 00:04:26.770 test-crypto-perf: explicitly disabled via build config 00:04:26.770 test-dma-perf: explicitly disabled via build config 00:04:26.770 test-eventdev: explicitly disabled via build config 00:04:26.770 test-fib: explicitly disabled via build config 00:04:26.770 test-flow-perf: explicitly disabled via build config 00:04:26.770 test-gpudev: explicitly disabled 
via build config 00:04:26.770 test-mldev: explicitly disabled via build config 00:04:26.770 test-pipeline: explicitly disabled via build config 00:04:26.770 test-pmd: explicitly disabled via build config 00:04:26.770 test-regex: explicitly disabled via build config 00:04:26.770 test-sad: explicitly disabled via build config 00:04:26.770 test-security-perf: explicitly disabled via build config 00:04:26.770 00:04:26.770 libs: 00:04:26.770 argparse: explicitly disabled via build config 00:04:26.770 metrics: explicitly disabled via build config 00:04:26.770 acl: explicitly disabled via build config 00:04:26.770 bbdev: explicitly disabled via build config 00:04:26.770 bitratestats: explicitly disabled via build config 00:04:26.770 bpf: explicitly disabled via build config 00:04:26.770 cfgfile: explicitly disabled via build config 00:04:26.770 distributor: explicitly disabled via build config 00:04:26.770 efd: explicitly disabled via build config 00:04:26.770 eventdev: explicitly disabled via build config 00:04:26.770 dispatcher: explicitly disabled via build config 00:04:26.770 gpudev: explicitly disabled via build config 00:04:26.770 gro: explicitly disabled via build config 00:04:26.770 gso: explicitly disabled via build config 00:04:26.770 ip_frag: explicitly disabled via build config 00:04:26.770 jobstats: explicitly disabled via build config 00:04:26.770 latencystats: explicitly disabled via build config 00:04:26.770 lpm: explicitly disabled via build config 00:04:26.770 member: explicitly disabled via build config 00:04:26.770 pcapng: explicitly disabled via build config 00:04:26.770 rawdev: explicitly disabled via build config 00:04:26.770 regexdev: explicitly disabled via build config 00:04:26.770 mldev: explicitly disabled via build config 00:04:26.770 rib: explicitly disabled via build config 00:04:26.770 sched: explicitly disabled via build config 00:04:26.770 stack: explicitly disabled via build config 00:04:26.770 ipsec: explicitly disabled via build config 00:04:26.770 pdcp: explicitly disabled via build config 00:04:26.770 fib: explicitly disabled via build config 00:04:26.770 port: explicitly disabled via build config 00:04:26.770 pdump: explicitly disabled via build config 00:04:26.770 table: explicitly disabled via build config 00:04:26.770 pipeline: explicitly disabled via build config 00:04:26.770 graph: explicitly disabled via build config 00:04:26.770 node: explicitly disabled via build config 00:04:26.770 00:04:26.770 drivers: 00:04:26.770 common/cpt: not in enabled drivers build config 00:04:26.770 common/dpaax: not in enabled drivers build config 00:04:26.770 common/iavf: not in enabled drivers build config 00:04:26.771 common/idpf: not in enabled drivers build config 00:04:26.771 common/ionic: not in enabled drivers build config 00:04:26.771 common/mvep: not in enabled drivers build config 00:04:26.771 common/octeontx: not in enabled drivers build config 00:04:26.771 bus/auxiliary: not in enabled drivers build config 00:04:26.771 bus/cdx: not in enabled drivers build config 00:04:26.771 bus/dpaa: not in enabled drivers build config 00:04:26.771 bus/fslmc: not in enabled drivers build config 00:04:26.771 bus/ifpga: not in enabled drivers build config 00:04:26.771 bus/platform: not in enabled drivers build config 00:04:26.771 bus/uacce: not in enabled drivers build config 00:04:26.771 bus/vmbus: not in enabled drivers build config 00:04:26.771 common/cnxk: not in enabled drivers build config 00:04:26.771 common/mlx5: not in enabled drivers build config 00:04:26.771 
common/nfp: not in enabled drivers build config 00:04:26.771 common/nitrox: not in enabled drivers build config 00:04:26.771 common/qat: not in enabled drivers build config 00:04:26.771 common/sfc_efx: not in enabled drivers build config 00:04:26.771 mempool/bucket: not in enabled drivers build config 00:04:26.771 mempool/cnxk: not in enabled drivers build config 00:04:26.771 mempool/dpaa: not in enabled drivers build config 00:04:26.771 mempool/dpaa2: not in enabled drivers build config 00:04:26.771 mempool/octeontx: not in enabled drivers build config 00:04:26.771 mempool/stack: not in enabled drivers build config 00:04:26.771 dma/cnxk: not in enabled drivers build config 00:04:26.771 dma/dpaa: not in enabled drivers build config 00:04:26.771 dma/dpaa2: not in enabled drivers build config 00:04:26.771 dma/hisilicon: not in enabled drivers build config 00:04:26.771 dma/idxd: not in enabled drivers build config 00:04:26.771 dma/ioat: not in enabled drivers build config 00:04:26.771 dma/skeleton: not in enabled drivers build config 00:04:26.771 net/af_packet: not in enabled drivers build config 00:04:26.771 net/af_xdp: not in enabled drivers build config 00:04:26.771 net/ark: not in enabled drivers build config 00:04:26.771 net/atlantic: not in enabled drivers build config 00:04:26.771 net/avp: not in enabled drivers build config 00:04:26.771 net/axgbe: not in enabled drivers build config 00:04:26.771 net/bnx2x: not in enabled drivers build config 00:04:26.771 net/bnxt: not in enabled drivers build config 00:04:26.771 net/bonding: not in enabled drivers build config 00:04:26.771 net/cnxk: not in enabled drivers build config 00:04:26.771 net/cpfl: not in enabled drivers build config 00:04:26.771 net/cxgbe: not in enabled drivers build config 00:04:26.771 net/dpaa: not in enabled drivers build config 00:04:26.771 net/dpaa2: not in enabled drivers build config 00:04:26.771 net/e1000: not in enabled drivers build config 00:04:26.771 net/ena: not in enabled drivers build config 00:04:26.771 net/enetc: not in enabled drivers build config 00:04:26.771 net/enetfec: not in enabled drivers build config 00:04:26.771 net/enic: not in enabled drivers build config 00:04:26.771 net/failsafe: not in enabled drivers build config 00:04:26.771 net/fm10k: not in enabled drivers build config 00:04:26.771 net/gve: not in enabled drivers build config 00:04:26.771 net/hinic: not in enabled drivers build config 00:04:26.771 net/hns3: not in enabled drivers build config 00:04:26.771 net/i40e: not in enabled drivers build config 00:04:26.771 net/iavf: not in enabled drivers build config 00:04:26.771 net/ice: not in enabled drivers build config 00:04:26.771 net/idpf: not in enabled drivers build config 00:04:26.771 net/igc: not in enabled drivers build config 00:04:26.771 net/ionic: not in enabled drivers build config 00:04:26.771 net/ipn3ke: not in enabled drivers build config 00:04:26.771 net/ixgbe: not in enabled drivers build config 00:04:26.771 net/mana: not in enabled drivers build config 00:04:26.771 net/memif: not in enabled drivers build config 00:04:26.771 net/mlx4: not in enabled drivers build config 00:04:26.771 net/mlx5: not in enabled drivers build config 00:04:26.771 net/mvneta: not in enabled drivers build config 00:04:26.771 net/mvpp2: not in enabled drivers build config 00:04:26.771 net/netvsc: not in enabled drivers build config 00:04:26.771 net/nfb: not in enabled drivers build config 00:04:26.771 net/nfp: not in enabled drivers build config 00:04:26.771 net/ngbe: not in enabled drivers build 
config 00:04:26.771 net/null: not in enabled drivers build config 00:04:26.771 net/octeontx: not in enabled drivers build config 00:04:26.771 net/octeon_ep: not in enabled drivers build config 00:04:26.771 net/pcap: not in enabled drivers build config 00:04:26.771 net/pfe: not in enabled drivers build config 00:04:26.771 net/qede: not in enabled drivers build config 00:04:26.771 net/ring: not in enabled drivers build config 00:04:26.771 net/sfc: not in enabled drivers build config 00:04:26.771 net/softnic: not in enabled drivers build config 00:04:26.771 net/tap: not in enabled drivers build config 00:04:26.771 net/thunderx: not in enabled drivers build config 00:04:26.771 net/txgbe: not in enabled drivers build config 00:04:26.771 net/vdev_netvsc: not in enabled drivers build config 00:04:26.771 net/vhost: not in enabled drivers build config 00:04:26.771 net/virtio: not in enabled drivers build config 00:04:26.771 net/vmxnet3: not in enabled drivers build config 00:04:26.771 raw/*: missing internal dependency, "rawdev" 00:04:26.771 crypto/armv8: not in enabled drivers build config 00:04:26.771 crypto/bcmfs: not in enabled drivers build config 00:04:26.771 crypto/caam_jr: not in enabled drivers build config 00:04:26.771 crypto/ccp: not in enabled drivers build config 00:04:26.771 crypto/cnxk: not in enabled drivers build config 00:04:26.771 crypto/dpaa_sec: not in enabled drivers build config 00:04:26.771 crypto/dpaa2_sec: not in enabled drivers build config 00:04:26.771 crypto/ipsec_mb: not in enabled drivers build config 00:04:26.771 crypto/mlx5: not in enabled drivers build config 00:04:26.771 crypto/mvsam: not in enabled drivers build config 00:04:26.771 crypto/nitrox: not in enabled drivers build config 00:04:26.771 crypto/null: not in enabled drivers build config 00:04:26.771 crypto/octeontx: not in enabled drivers build config 00:04:26.771 crypto/openssl: not in enabled drivers build config 00:04:26.771 crypto/scheduler: not in enabled drivers build config 00:04:26.771 crypto/uadk: not in enabled drivers build config 00:04:26.771 crypto/virtio: not in enabled drivers build config 00:04:26.771 compress/isal: not in enabled drivers build config 00:04:26.771 compress/mlx5: not in enabled drivers build config 00:04:26.771 compress/nitrox: not in enabled drivers build config 00:04:26.771 compress/octeontx: not in enabled drivers build config 00:04:26.771 compress/zlib: not in enabled drivers build config 00:04:26.771 regex/*: missing internal dependency, "regexdev" 00:04:26.771 ml/*: missing internal dependency, "mldev" 00:04:26.771 vdpa/ifc: not in enabled drivers build config 00:04:26.771 vdpa/mlx5: not in enabled drivers build config 00:04:26.771 vdpa/nfp: not in enabled drivers build config 00:04:26.771 vdpa/sfc: not in enabled drivers build config 00:04:26.771 event/*: missing internal dependency, "eventdev" 00:04:26.771 baseband/*: missing internal dependency, "bbdev" 00:04:26.771 gpu/*: missing internal dependency, "gpudev" 00:04:26.771 00:04:26.771 00:04:26.771 Build targets in project: 84 00:04:26.771 00:04:26.771 DPDK 24.03.0 00:04:26.771 00:04:26.771 User defined options 00:04:26.771 buildtype : debug 00:04:26.771 default_library : shared 00:04:26.771 libdir : lib 00:04:26.771 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:26.771 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:26.771 c_link_args : 00:04:26.771 cpu_instruction_set: native 00:04:26.771 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:04:26.771 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:04:26.771 enable_docs : false 00:04:26.771 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:26.771 enable_kmods : false 00:04:26.771 max_lcores : 128 00:04:26.771 tests : false 00:04:26.771 00:04:26.771 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:26.771 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:27.033 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:27.033 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:27.033 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:27.033 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:27.033 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:27.033 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:27.033 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:27.033 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:27.033 [9/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:27.033 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:27.033 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:27.033 [12/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:27.033 [13/267] Linking static target lib/librte_log.a 00:04:27.033 [14/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:27.033 [15/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:27.033 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:27.033 [17/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:27.033 [18/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:27.033 [19/267] Linking static target lib/librte_kvargs.a 00:04:27.033 [20/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:27.033 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:27.033 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:27.033 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:27.033 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:27.033 [25/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:27.292 [26/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:27.292 [27/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:27.292 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:27.292 [29/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:27.292 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:27.292 [31/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:27.292 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:27.292 [33/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:27.292 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:27.292 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:27.292 [36/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:27.292 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:27.292 [38/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:27.293 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:27.293 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:27.293 [41/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:27.293 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:27.293 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:27.293 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:27.293 [45/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:27.293 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:27.293 [47/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:27.293 [48/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:27.293 [49/267] Linking static target lib/librte_pci.a 00:04:27.293 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:27.293 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:27.293 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:27.293 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:27.293 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:27.293 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:27.293 [56/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:04:27.293 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:27.293 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:27.293 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:27.293 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:27.293 [61/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:27.293 [62/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:27.293 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:27.293 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:27.293 [65/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:27.294 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:27.294 [67/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:27.294 [68/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:27.294 [69/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:27.294 [70/267] Linking static target lib/librte_meter.a 00:04:27.294 [71/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:04:27.294 [72/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:27.294 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:27.294 [74/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:27.294 [75/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:27.294 [76/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:27.294 [77/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:27.294 [78/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:27.294 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:27.294 [80/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:27.294 [81/267] Linking static target lib/librte_ring.a 00:04:27.294 [82/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:27.294 [83/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:27.294 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:27.294 [85/267] Linking static target lib/librte_timer.a 00:04:27.555 [86/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:27.555 [87/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:27.555 [88/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:27.555 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:27.555 [90/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:27.555 [91/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:27.555 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:27.555 [93/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:27.555 [94/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:27.555 [95/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:27.555 [96/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:27.555 [97/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:27.555 [98/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:27.555 [99/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:27.555 [100/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.555 [101/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:27.555 [102/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.555 [103/267] Linking static target lib/librte_rcu.a 00:04:27.555 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:27.555 [105/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:27.555 [106/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.555 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:27.555 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:27.555 [109/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:27.555 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:27.555 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:27.555 [112/267] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:27.555 [113/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:27.555 [114/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:27.555 [115/267] Linking target lib/librte_log.so.24.1 00:04:27.555 [116/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:27.555 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:27.555 [118/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.555 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:27.555 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:27.555 [121/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:27.555 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:27.555 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:27.555 [124/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:27.555 [125/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:27.555 [126/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:27.555 [127/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:27.555 [128/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:27.555 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:27.555 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:27.555 [131/267] Linking static target lib/librte_dmadev.a 00:04:27.555 [132/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:27.555 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:27.555 [134/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:27.817 [135/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:27.817 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:27.817 [137/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:27.817 [138/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:27.817 [139/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:27.817 [140/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:27.817 [141/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:27.817 [142/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:27.817 [143/267] Linking static target lib/librte_telemetry.a 00:04:27.817 [144/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:27.817 [145/267] Linking static target lib/librte_net.a 00:04:27.817 [146/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:27.817 [147/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:27.817 [148/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:27.818 [149/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:27.818 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:27.818 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:27.818 [152/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:27.818 [153/267] Linking static target lib/librte_mempool.a 00:04:27.818 [154/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:27.818 [155/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:27.818 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:27.818 [157/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:27.818 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:27.818 [159/267] Linking target lib/librte_kvargs.so.24.1 00:04:27.818 [160/267] Linking static target lib/librte_reorder.a 00:04:27.818 [161/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:27.818 [162/267] Linking static target lib/librte_cmdline.a 00:04:27.818 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:27.818 [164/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.818 [165/267] Linking static target lib/librte_hash.a 00:04:27.818 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:27.818 [167/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:27.818 [168/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:27.818 [169/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:27.818 [170/267] Linking static target lib/librte_mbuf.a 00:04:27.818 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:27.818 [172/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:27.818 [173/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:27.818 [174/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:27.818 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:27.818 [176/267] Linking static target lib/librte_compressdev.a 00:04:27.818 [177/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:27.818 [178/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:27.818 [179/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:27.818 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:27.818 [181/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:27.818 [182/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:27.818 [183/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:27.818 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:27.818 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:27.818 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:27.818 [187/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.818 [188/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:27.818 [189/267] Linking static target drivers/librte_mempool_ring.a 00:04:27.818 [190/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.818 [191/267] Linking static target lib/librte_power.a 00:04:27.818 [192/267] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:04:27.818 [193/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:27.818 [194/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:27.818 [195/267] Linking static target drivers/librte_bus_vdev.a 00:04:27.818 [196/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:27.818 [197/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:27.818 [198/267] Linking static target lib/librte_cryptodev.a 00:04:27.818 [199/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:27.818 [200/267] Linking static target lib/librte_security.a 00:04:28.079 [201/267] Linking static target lib/librte_eal.a 00:04:28.079 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:28.079 [203/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:28.079 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.079 [205/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:28.079 [206/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:28.079 [207/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:28.079 [208/267] Linking static target drivers/librte_bus_pci.a 00:04:28.341 [209/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.341 [210/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.341 [211/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:28.341 [212/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.341 [213/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:28.341 [214/267] Linking static target lib/librte_ethdev.a 00:04:28.341 [215/267] Linking target lib/librte_telemetry.so.24.1 00:04:28.341 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.603 [217/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:28.603 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:28.603 [219/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.603 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.603 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.864 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.864 [223/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.864 [224/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.864 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.125 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.698 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:29.698 [228/267] Linking static target lib/librte_vhost.a 00:04:29.960 [229/267] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:04:31.876 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.559 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.502 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.502 [233/267] Linking target lib/librte_eal.so.24.1 00:04:39.502 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:39.502 [235/267] Linking target lib/librte_pci.so.24.1 00:04:39.502 [236/267] Linking target lib/librte_dmadev.so.24.1 00:04:39.502 [237/267] Linking target lib/librte_ring.so.24.1 00:04:39.502 [238/267] Linking target lib/librte_meter.so.24.1 00:04:39.502 [239/267] Linking target lib/librte_timer.so.24.1 00:04:39.502 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:04:39.763 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:39.763 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:39.763 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:39.763 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:39.763 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:39.763 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:04:39.763 [247/267] Linking target lib/librte_rcu.so.24.1 00:04:39.763 [248/267] Linking target lib/librte_mempool.so.24.1 00:04:40.024 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:40.024 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:40.024 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:04:40.024 [252/267] Linking target lib/librte_mbuf.so.24.1 00:04:40.024 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:40.024 [254/267] Linking target lib/librte_net.so.24.1 00:04:40.024 [255/267] Linking target lib/librte_reorder.so.24.1 00:04:40.024 [256/267] Linking target lib/librte_cryptodev.so.24.1 00:04:40.024 [257/267] Linking target lib/librte_compressdev.so.24.1 00:04:40.284 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:40.284 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:40.284 [260/267] Linking target lib/librte_hash.so.24.1 00:04:40.284 [261/267] Linking target lib/librte_security.so.24.1 00:04:40.284 [262/267] Linking target lib/librte_cmdline.so.24.1 00:04:40.284 [263/267] Linking target lib/librte_ethdev.so.24.1 00:04:40.544 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:40.544 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:40.544 [266/267] Linking target lib/librte_power.so.24.1 00:04:40.544 [267/267] Linking target lib/librte_vhost.so.24.1 00:04:40.544 INFO: autodetecting backend as ninja 00:04:40.544 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:04:44.754 CC lib/ut/ut.o 00:04:44.754 CC lib/log/log.o 00:04:44.754 CC lib/ut_mock/mock.o 00:04:44.755 CC lib/log/log_flags.o 00:04:44.755 CC lib/log/log_deprecated.o 00:04:44.755 LIB libspdk_ut.a 00:04:44.755 LIB libspdk_ut_mock.a 
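[editor's note] The two INFO lines above mark the hand-off from meson to ninja for the DPDK submodule: meson autodetects ninja as its backend and prints the exact build command it is about to run. A minimal bash sketch of reproducing that step by hand, assuming the workspace path shown in this log; SPDK's configure script normally performs the meson setup (with many -D options omitted here), so this is illustrative, not the CI's real invocation:

    # rebuild the DPDK submodule the way the log's backend command does
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp              # one-time configure; CI has already done this
    ninja -C build-tmp -j "$(nproc)"   # the calculated backend command, -j 144 generalized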
00:04:44.755 LIB libspdk_log.a 00:04:44.755 SO libspdk_ut.so.2.0 00:04:44.755 SO libspdk_ut_mock.so.6.0 00:04:44.755 SO libspdk_log.so.7.0 00:04:44.755 SYMLINK libspdk_ut.so 00:04:44.755 SYMLINK libspdk_log.so 00:04:44.755 SYMLINK libspdk_ut_mock.so 00:04:45.015 CC lib/ioat/ioat.o 00:04:45.015 CC lib/util/base64.o 00:04:45.015 CC lib/util/bit_array.o 00:04:45.015 CC lib/util/cpuset.o 00:04:45.015 CC lib/util/crc16.o 00:04:45.015 CC lib/util/crc32c.o 00:04:45.015 CC lib/util/crc32.o 00:04:45.015 CC lib/util/crc32_ieee.o 00:04:45.015 CC lib/dma/dma.o 00:04:45.015 CC lib/util/crc64.o 00:04:45.015 CC lib/util/dif.o 00:04:45.015 CXX lib/trace_parser/trace.o 00:04:45.015 CC lib/util/fd.o 00:04:45.015 CC lib/util/fd_group.o 00:04:45.015 CC lib/util/file.o 00:04:45.015 CC lib/util/hexlify.o 00:04:45.015 CC lib/util/iov.o 00:04:45.015 CC lib/util/math.o 00:04:45.015 CC lib/util/net.o 00:04:45.015 CC lib/util/pipe.o 00:04:45.015 CC lib/util/strerror_tls.o 00:04:45.015 CC lib/util/string.o 00:04:45.015 CC lib/util/uuid.o 00:04:45.015 CC lib/util/xor.o 00:04:45.015 CC lib/util/zipf.o 00:04:45.015 CC lib/util/md5.o 00:04:45.276 CC lib/vfio_user/host/vfio_user_pci.o 00:04:45.276 CC lib/vfio_user/host/vfio_user.o 00:04:45.276 LIB libspdk_dma.a 00:04:45.276 SO libspdk_dma.so.5.0 00:04:45.276 LIB libspdk_ioat.a 00:04:45.276 SYMLINK libspdk_dma.so 00:04:45.276 SO libspdk_ioat.so.7.0 00:04:45.537 SYMLINK libspdk_ioat.so 00:04:45.537 LIB libspdk_vfio_user.a 00:04:45.537 SO libspdk_vfio_user.so.5.0 00:04:45.537 LIB libspdk_util.a 00:04:45.537 SYMLINK libspdk_vfio_user.so 00:04:45.537 SO libspdk_util.so.10.0 00:04:45.798 SYMLINK libspdk_util.so 00:04:45.799 LIB libspdk_trace_parser.a 00:04:45.799 SO libspdk_trace_parser.so.6.0 00:04:46.059 SYMLINK libspdk_trace_parser.so 00:04:46.059 CC lib/conf/conf.o 00:04:46.059 CC lib/json/json_util.o 00:04:46.059 CC lib/json/json_parse.o 00:04:46.059 CC lib/json/json_write.o 00:04:46.059 CC lib/idxd/idxd.o 00:04:46.059 CC lib/idxd/idxd_user.o 00:04:46.059 CC lib/idxd/idxd_kernel.o 00:04:46.059 CC lib/rdma_provider/common.o 00:04:46.059 CC lib/rdma_utils/rdma_utils.o 00:04:46.059 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:46.059 CC lib/vmd/vmd.o 00:04:46.059 CC lib/vmd/led.o 00:04:46.059 CC lib/env_dpdk/env.o 00:04:46.059 CC lib/env_dpdk/memory.o 00:04:46.059 CC lib/env_dpdk/pci.o 00:04:46.059 CC lib/env_dpdk/init.o 00:04:46.059 CC lib/env_dpdk/threads.o 00:04:46.059 CC lib/env_dpdk/pci_ioat.o 00:04:46.059 CC lib/env_dpdk/pci_virtio.o 00:04:46.059 CC lib/env_dpdk/pci_vmd.o 00:04:46.059 CC lib/env_dpdk/pci_idxd.o 00:04:46.059 CC lib/env_dpdk/pci_event.o 00:04:46.059 CC lib/env_dpdk/sigbus_handler.o 00:04:46.059 CC lib/env_dpdk/pci_dpdk.o 00:04:46.059 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:46.059 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:46.320 LIB libspdk_conf.a 00:04:46.320 LIB libspdk_rdma_provider.a 00:04:46.320 SO libspdk_conf.so.6.0 00:04:46.320 SO libspdk_rdma_provider.so.6.0 00:04:46.320 LIB libspdk_rdma_utils.a 00:04:46.320 LIB libspdk_json.a 00:04:46.320 SYMLINK libspdk_conf.so 00:04:46.320 SO libspdk_rdma_utils.so.1.0 00:04:46.320 SO libspdk_json.so.6.0 00:04:46.320 SYMLINK libspdk_rdma_provider.so 00:04:46.581 SYMLINK libspdk_rdma_utils.so 00:04:46.581 SYMLINK libspdk_json.so 00:04:46.581 LIB libspdk_idxd.a 00:04:46.581 SO libspdk_idxd.so.12.1 00:04:46.581 LIB libspdk_vmd.a 00:04:46.842 SO libspdk_vmd.so.6.0 00:04:46.842 SYMLINK libspdk_idxd.so 00:04:46.842 SYMLINK libspdk_vmd.so 00:04:46.842 CC lib/jsonrpc/jsonrpc_server.o 00:04:46.842 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:04:46.842 CC lib/jsonrpc/jsonrpc_client.o 00:04:46.842 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:47.104 LIB libspdk_jsonrpc.a 00:04:47.104 SO libspdk_jsonrpc.so.6.0 00:04:47.365 SYMLINK libspdk_jsonrpc.so 00:04:47.365 LIB libspdk_env_dpdk.a 00:04:47.365 SO libspdk_env_dpdk.so.15.0 00:04:47.626 SYMLINK libspdk_env_dpdk.so 00:04:47.626 CC lib/rpc/rpc.o 00:04:47.885 LIB libspdk_rpc.a 00:04:47.885 SO libspdk_rpc.so.6.0 00:04:47.885 SYMLINK libspdk_rpc.so 00:04:48.146 CC lib/notify/notify.o 00:04:48.146 CC lib/notify/notify_rpc.o 00:04:48.408 CC lib/trace/trace.o 00:04:48.408 CC lib/trace/trace_rpc.o 00:04:48.408 CC lib/keyring/keyring.o 00:04:48.408 CC lib/trace/trace_flags.o 00:04:48.409 CC lib/keyring/keyring_rpc.o 00:04:48.409 LIB libspdk_notify.a 00:04:48.409 SO libspdk_notify.so.6.0 00:04:48.409 LIB libspdk_keyring.a 00:04:48.671 LIB libspdk_trace.a 00:04:48.671 SYMLINK libspdk_notify.so 00:04:48.671 SO libspdk_keyring.so.2.0 00:04:48.671 SO libspdk_trace.so.11.0 00:04:48.671 SYMLINK libspdk_keyring.so 00:04:48.671 SYMLINK libspdk_trace.so 00:04:48.931 CC lib/thread/thread.o 00:04:48.931 CC lib/thread/iobuf.o 00:04:48.931 CC lib/sock/sock.o 00:04:48.931 CC lib/sock/sock_rpc.o 00:04:49.505 LIB libspdk_sock.a 00:04:49.505 SO libspdk_sock.so.10.0 00:04:49.505 SYMLINK libspdk_sock.so 00:04:49.766 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:49.766 CC lib/nvme/nvme_ctrlr.o 00:04:49.766 CC lib/nvme/nvme_fabric.o 00:04:49.766 CC lib/nvme/nvme_ns_cmd.o 00:04:49.766 CC lib/nvme/nvme_ns.o 00:04:49.766 CC lib/nvme/nvme_pcie_common.o 00:04:49.766 CC lib/nvme/nvme_pcie.o 00:04:49.766 CC lib/nvme/nvme_qpair.o 00:04:49.766 CC lib/nvme/nvme.o 00:04:49.766 CC lib/nvme/nvme_quirks.o 00:04:49.766 CC lib/nvme/nvme_transport.o 00:04:49.766 CC lib/nvme/nvme_discovery.o 00:04:49.766 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:49.766 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:49.766 CC lib/nvme/nvme_tcp.o 00:04:49.766 CC lib/nvme/nvme_opal.o 00:04:49.766 CC lib/nvme/nvme_poll_group.o 00:04:49.766 CC lib/nvme/nvme_io_msg.o 00:04:49.766 CC lib/nvme/nvme_zns.o 00:04:49.766 CC lib/nvme/nvme_stubs.o 00:04:49.766 CC lib/nvme/nvme_auth.o 00:04:49.766 CC lib/nvme/nvme_cuse.o 00:04:49.766 CC lib/nvme/nvme_vfio_user.o 00:04:49.766 CC lib/nvme/nvme_rdma.o 00:04:50.337 LIB libspdk_thread.a 00:04:50.337 SO libspdk_thread.so.10.1 00:04:50.337 SYMLINK libspdk_thread.so 00:04:50.597 CC lib/init/json_config.o 00:04:50.597 CC lib/init/subsystem.o 00:04:50.597 CC lib/init/rpc.o 00:04:50.597 CC lib/init/subsystem_rpc.o 00:04:50.597 CC lib/vfu_tgt/tgt_endpoint.o 00:04:50.597 CC lib/vfu_tgt/tgt_rpc.o 00:04:50.597 CC lib/fsdev/fsdev.o 00:04:50.859 CC lib/fsdev/fsdev_io.o 00:04:50.859 CC lib/fsdev/fsdev_rpc.o 00:04:50.859 CC lib/blob/blobstore.o 00:04:50.859 CC lib/blob/request.o 00:04:50.859 CC lib/accel/accel.o 00:04:50.859 CC lib/accel/accel_sw.o 00:04:50.859 CC lib/blob/zeroes.o 00:04:50.859 CC lib/blob/blob_bs_dev.o 00:04:50.859 CC lib/accel/accel_rpc.o 00:04:50.859 CC lib/virtio/virtio.o 00:04:50.859 CC lib/virtio/virtio_vhost_user.o 00:04:50.859 CC lib/virtio/virtio_vfio_user.o 00:04:50.859 CC lib/virtio/virtio_pci.o 00:04:50.859 LIB libspdk_init.a 00:04:51.120 SO libspdk_init.so.6.0 00:04:51.120 LIB libspdk_vfu_tgt.a 00:04:51.120 LIB libspdk_virtio.a 00:04:51.120 SYMLINK libspdk_init.so 00:04:51.120 SO libspdk_vfu_tgt.so.3.0 00:04:51.120 SO libspdk_virtio.so.7.0 00:04:51.120 SYMLINK libspdk_vfu_tgt.so 00:04:51.120 SYMLINK libspdk_virtio.so 00:04:51.380 LIB libspdk_fsdev.a 00:04:51.380 SO libspdk_fsdev.so.1.0 
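[editor's note] The SO / SYMLINK pairs in the output above follow the standard versioned shared-library layout: the real file name (and soname) carry the ABI version, and an unversioned symlink is provided for link-time use. A generic bash sketch of that convention, assuming an example object file built with -fPIC and example version numbers rather than SPDK's actual Makefile rules:

    # the versioned file embeds its soname ...
    gcc -shared -Wl,-soname,libspdk_ut.so.2 -o libspdk_ut.so.2.0 ut.o
    # ... and the plain name is only a symlink pointing at it
    ln -sf libspdk_ut.so.2.0 libspdk_ut.so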
00:04:51.380 CC lib/event/app.o 00:04:51.380 CC lib/event/reactor.o 00:04:51.380 CC lib/event/log_rpc.o 00:04:51.380 SYMLINK libspdk_fsdev.so 00:04:51.380 CC lib/event/app_rpc.o 00:04:51.380 CC lib/event/scheduler_static.o 00:04:51.641 LIB libspdk_accel.a 00:04:51.641 LIB libspdk_nvme.a 00:04:51.641 SO libspdk_accel.so.16.0 00:04:51.902 SO libspdk_nvme.so.14.0 00:04:51.902 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:51.902 SYMLINK libspdk_accel.so 00:04:51.902 LIB libspdk_event.a 00:04:51.903 SO libspdk_event.so.14.0 00:04:51.903 SYMLINK libspdk_event.so 00:04:51.903 SYMLINK libspdk_nvme.so 00:04:52.164 CC lib/bdev/bdev.o 00:04:52.164 CC lib/bdev/bdev_rpc.o 00:04:52.164 CC lib/bdev/scsi_nvme.o 00:04:52.164 CC lib/bdev/bdev_zone.o 00:04:52.164 CC lib/bdev/part.o 00:04:52.424 LIB libspdk_fuse_dispatcher.a 00:04:52.424 SO libspdk_fuse_dispatcher.so.1.0 00:04:52.424 SYMLINK libspdk_fuse_dispatcher.so 00:04:53.367 LIB libspdk_blob.a 00:04:53.367 SO libspdk_blob.so.11.0 00:04:53.367 SYMLINK libspdk_blob.so 00:04:53.941 CC lib/blobfs/blobfs.o 00:04:53.941 CC lib/lvol/lvol.o 00:04:53.941 CC lib/blobfs/tree.o 00:04:54.515 LIB libspdk_bdev.a 00:04:54.515 SO libspdk_bdev.so.16.0 00:04:54.515 LIB libspdk_blobfs.a 00:04:54.515 SYMLINK libspdk_bdev.so 00:04:54.515 SO libspdk_blobfs.so.10.0 00:04:54.777 LIB libspdk_lvol.a 00:04:54.777 SYMLINK libspdk_blobfs.so 00:04:54.777 SO libspdk_lvol.so.10.0 00:04:54.777 SYMLINK libspdk_lvol.so 00:04:55.037 CC lib/ublk/ublk.o 00:04:55.037 CC lib/ublk/ublk_rpc.o 00:04:55.037 CC lib/scsi/dev.o 00:04:55.037 CC lib/scsi/lun.o 00:04:55.037 CC lib/scsi/port.o 00:04:55.037 CC lib/scsi/scsi.o 00:04:55.037 CC lib/nbd/nbd.o 00:04:55.037 CC lib/scsi/scsi_bdev.o 00:04:55.037 CC lib/nbd/nbd_rpc.o 00:04:55.037 CC lib/nvmf/ctrlr.o 00:04:55.037 CC lib/scsi/scsi_pr.o 00:04:55.037 CC lib/scsi/scsi_rpc.o 00:04:55.037 CC lib/scsi/task.o 00:04:55.037 CC lib/nvmf/ctrlr_discovery.o 00:04:55.037 CC lib/nvmf/ctrlr_bdev.o 00:04:55.037 CC lib/nvmf/subsystem.o 00:04:55.037 CC lib/nvmf/nvmf.o 00:04:55.037 CC lib/nvmf/nvmf_rpc.o 00:04:55.037 CC lib/nvmf/transport.o 00:04:55.037 CC lib/nvmf/mdns_server.o 00:04:55.037 CC lib/ftl/ftl_core.o 00:04:55.037 CC lib/nvmf/tcp.o 00:04:55.037 CC lib/nvmf/stubs.o 00:04:55.037 CC lib/ftl/ftl_init.o 00:04:55.037 CC lib/ftl/ftl_layout.o 00:04:55.037 CC lib/ftl/ftl_debug.o 00:04:55.037 CC lib/nvmf/vfio_user.o 00:04:55.037 CC lib/ftl/ftl_io.o 00:04:55.037 CC lib/nvmf/rdma.o 00:04:55.037 CC lib/ftl/ftl_sb.o 00:04:55.037 CC lib/ftl/ftl_l2p.o 00:04:55.037 CC lib/nvmf/auth.o 00:04:55.037 CC lib/ftl/ftl_l2p_flat.o 00:04:55.037 CC lib/ftl/ftl_nv_cache.o 00:04:55.037 CC lib/ftl/ftl_band.o 00:04:55.037 CC lib/ftl/ftl_band_ops.o 00:04:55.037 CC lib/ftl/ftl_writer.o 00:04:55.037 CC lib/ftl/ftl_reloc.o 00:04:55.037 CC lib/ftl/ftl_rq.o 00:04:55.037 CC lib/ftl/ftl_p2l.o 00:04:55.037 CC lib/ftl/ftl_l2p_cache.o 00:04:55.037 CC lib/ftl/ftl_p2l_log.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:55.037 CC lib/ftl/utils/ftl_md.o 00:04:55.037 CC lib/ftl/mngt/ftl_mngt_l2p.o 
00:04:55.037 CC lib/ftl/utils/ftl_mempool.o 00:04:55.037 CC lib/ftl/utils/ftl_bitmap.o 00:04:55.037 CC lib/ftl/utils/ftl_property.o 00:04:55.037 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:55.037 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:55.037 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:55.037 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:55.037 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:55.037 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:55.037 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:55.037 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:55.037 CC lib/ftl/utils/ftl_conf.o 00:04:55.037 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:55.037 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:55.037 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:55.037 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:55.037 CC lib/ftl/base/ftl_base_dev.o 00:04:55.037 CC lib/ftl/ftl_trace.o 00:04:55.037 CC lib/ftl/base/ftl_base_bdev.o 00:04:55.037 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:55.608 LIB libspdk_nbd.a 00:04:55.608 SO libspdk_nbd.so.7.0 00:04:55.608 SYMLINK libspdk_nbd.so 00:04:55.608 LIB libspdk_scsi.a 00:04:55.608 LIB libspdk_ublk.a 00:04:55.608 SO libspdk_scsi.so.9.0 00:04:55.608 SO libspdk_ublk.so.3.0 00:04:55.608 SYMLINK libspdk_scsi.so 00:04:55.608 SYMLINK libspdk_ublk.so 00:04:55.870 LIB libspdk_ftl.a 00:04:56.130 CC lib/vhost/vhost.o 00:04:56.130 CC lib/vhost/vhost_rpc.o 00:04:56.130 CC lib/vhost/vhost_blk.o 00:04:56.130 CC lib/vhost/vhost_scsi.o 00:04:56.130 CC lib/vhost/rte_vhost_user.o 00:04:56.130 CC lib/iscsi/conn.o 00:04:56.130 CC lib/iscsi/init_grp.o 00:04:56.130 CC lib/iscsi/iscsi.o 00:04:56.130 CC lib/iscsi/param.o 00:04:56.130 CC lib/iscsi/portal_grp.o 00:04:56.130 CC lib/iscsi/tgt_node.o 00:04:56.130 CC lib/iscsi/iscsi_subsystem.o 00:04:56.130 CC lib/iscsi/iscsi_rpc.o 00:04:56.130 CC lib/iscsi/task.o 00:04:56.130 SO libspdk_ftl.so.9.0 00:04:56.391 SYMLINK libspdk_ftl.so 00:04:56.963 LIB libspdk_nvmf.a 00:04:56.963 SO libspdk_nvmf.so.19.0 00:04:56.963 LIB libspdk_vhost.a 00:04:56.963 SO libspdk_vhost.so.8.0 00:04:57.224 SYMLINK libspdk_nvmf.so 00:04:57.224 SYMLINK libspdk_vhost.so 00:04:57.224 LIB libspdk_iscsi.a 00:04:57.486 SO libspdk_iscsi.so.8.0 00:04:57.486 SYMLINK libspdk_iscsi.so 00:04:58.060 CC module/vfu_device/vfu_virtio.o 00:04:58.060 CC module/vfu_device/vfu_virtio_scsi.o 00:04:58.060 CC module/vfu_device/vfu_virtio_blk.o 00:04:58.060 CC module/vfu_device/vfu_virtio_rpc.o 00:04:58.060 CC module/vfu_device/vfu_virtio_fs.o 00:04:58.060 CC module/env_dpdk/env_dpdk_rpc.o 00:04:58.321 CC module/sock/posix/posix.o 00:04:58.321 CC module/keyring/file/keyring.o 00:04:58.321 CC module/accel/ioat/accel_ioat.o 00:04:58.321 CC module/keyring/file/keyring_rpc.o 00:04:58.321 CC module/accel/ioat/accel_ioat_rpc.o 00:04:58.321 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:58.321 LIB libspdk_env_dpdk_rpc.a 00:04:58.321 CC module/blob/bdev/blob_bdev.o 00:04:58.321 CC module/scheduler/gscheduler/gscheduler.o 00:04:58.321 CC module/accel/error/accel_error.o 00:04:58.321 CC module/accel/iaa/accel_iaa.o 00:04:58.321 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:58.321 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:58.321 CC module/accel/error/accel_error_rpc.o 00:04:58.321 CC module/fsdev/aio/fsdev_aio.o 00:04:58.321 CC module/accel/iaa/accel_iaa_rpc.o 00:04:58.321 CC module/accel/dsa/accel_dsa.o 00:04:58.321 CC module/fsdev/aio/linux_aio_mgr.o 00:04:58.321 CC module/accel/dsa/accel_dsa_rpc.o 00:04:58.321 CC module/keyring/linux/keyring.o 00:04:58.321 CC module/keyring/linux/keyring_rpc.o 00:04:58.321 SO 
libspdk_env_dpdk_rpc.so.6.0 00:04:58.321 SYMLINK libspdk_env_dpdk_rpc.so 00:04:58.321 LIB libspdk_keyring_file.a 00:04:58.321 LIB libspdk_scheduler_gscheduler.a 00:04:58.321 LIB libspdk_scheduler_dpdk_governor.a 00:04:58.321 LIB libspdk_accel_ioat.a 00:04:58.321 LIB libspdk_keyring_linux.a 00:04:58.321 SO libspdk_keyring_file.so.2.0 00:04:58.321 LIB libspdk_accel_error.a 00:04:58.321 SO libspdk_scheduler_gscheduler.so.4.0 00:04:58.321 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:58.580 SO libspdk_keyring_linux.so.1.0 00:04:58.580 SO libspdk_accel_ioat.so.6.0 00:04:58.580 LIB libspdk_scheduler_dynamic.a 00:04:58.580 SO libspdk_accel_error.so.2.0 00:04:58.580 LIB libspdk_accel_iaa.a 00:04:58.580 SYMLINK libspdk_keyring_file.so 00:04:58.580 SO libspdk_accel_iaa.so.3.0 00:04:58.580 LIB libspdk_blob_bdev.a 00:04:58.580 SYMLINK libspdk_scheduler_gscheduler.so 00:04:58.580 SO libspdk_scheduler_dynamic.so.4.0 00:04:58.580 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:58.580 LIB libspdk_accel_dsa.a 00:04:58.580 SYMLINK libspdk_keyring_linux.so 00:04:58.580 SYMLINK libspdk_accel_ioat.so 00:04:58.580 SO libspdk_blob_bdev.so.11.0 00:04:58.580 SO libspdk_accel_dsa.so.5.0 00:04:58.580 SYMLINK libspdk_accel_error.so 00:04:58.580 SYMLINK libspdk_accel_iaa.so 00:04:58.580 SYMLINK libspdk_scheduler_dynamic.so 00:04:58.580 SYMLINK libspdk_blob_bdev.so 00:04:58.580 LIB libspdk_vfu_device.a 00:04:58.580 SYMLINK libspdk_accel_dsa.so 00:04:58.580 SO libspdk_vfu_device.so.3.0 00:04:58.841 SYMLINK libspdk_vfu_device.so 00:04:58.841 LIB libspdk_fsdev_aio.a 00:04:58.841 SO libspdk_fsdev_aio.so.1.0 00:04:58.841 LIB libspdk_sock_posix.a 00:04:58.841 SO libspdk_sock_posix.so.6.0 00:04:58.841 SYMLINK libspdk_fsdev_aio.so 00:04:59.102 SYMLINK libspdk_sock_posix.so 00:04:59.102 CC module/bdev/delay/vbdev_delay.o 00:04:59.102 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:59.102 CC module/blobfs/bdev/blobfs_bdev.o 00:04:59.102 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:59.102 CC module/bdev/null/bdev_null.o 00:04:59.102 CC module/bdev/null/bdev_null_rpc.o 00:04:59.102 CC module/bdev/error/vbdev_error.o 00:04:59.102 CC module/bdev/error/vbdev_error_rpc.o 00:04:59.102 CC module/bdev/gpt/gpt.o 00:04:59.102 CC module/bdev/gpt/vbdev_gpt.o 00:04:59.102 CC module/bdev/passthru/vbdev_passthru.o 00:04:59.102 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:59.102 CC module/bdev/split/vbdev_split.o 00:04:59.102 CC module/bdev/lvol/vbdev_lvol.o 00:04:59.102 CC module/bdev/nvme/bdev_nvme.o 00:04:59.102 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:59.102 CC module/bdev/split/vbdev_split_rpc.o 00:04:59.102 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:59.102 CC module/bdev/nvme/nvme_rpc.o 00:04:59.102 CC module/bdev/nvme/vbdev_opal.o 00:04:59.102 CC module/bdev/nvme/bdev_mdns_client.o 00:04:59.102 CC module/bdev/raid/bdev_raid_rpc.o 00:04:59.102 CC module/bdev/raid/bdev_raid.o 00:04:59.102 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:59.102 CC module/bdev/malloc/bdev_malloc.o 00:04:59.102 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:59.102 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:59.102 CC module/bdev/raid/bdev_raid_sb.o 00:04:59.102 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:59.102 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:59.102 CC module/bdev/aio/bdev_aio.o 00:04:59.102 CC module/bdev/raid/raid0.o 00:04:59.102 CC module/bdev/aio/bdev_aio_rpc.o 00:04:59.102 CC module/bdev/raid/raid1.o 00:04:59.102 CC module/bdev/raid/concat.o 00:04:59.102 CC module/bdev/ftl/bdev_ftl.o 00:04:59.102 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:04:59.102 CC module/bdev/iscsi/bdev_iscsi.o 00:04:59.102 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:59.102 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:59.102 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:59.102 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:59.363 LIB libspdk_blobfs_bdev.a 00:04:59.363 SO libspdk_blobfs_bdev.so.6.0 00:04:59.363 LIB libspdk_bdev_null.a 00:04:59.363 LIB libspdk_bdev_error.a 00:04:59.363 SO libspdk_bdev_null.so.6.0 00:04:59.363 SO libspdk_bdev_error.so.6.0 00:04:59.363 LIB libspdk_bdev_passthru.a 00:04:59.363 LIB libspdk_bdev_split.a 00:04:59.363 SYMLINK libspdk_blobfs_bdev.so 00:04:59.363 LIB libspdk_bdev_gpt.a 00:04:59.363 LIB libspdk_bdev_delay.a 00:04:59.363 SO libspdk_bdev_passthru.so.6.0 00:04:59.625 LIB libspdk_bdev_zone_block.a 00:04:59.625 SO libspdk_bdev_split.so.6.0 00:04:59.625 SO libspdk_bdev_delay.so.6.0 00:04:59.625 SYMLINK libspdk_bdev_error.so 00:04:59.625 SO libspdk_bdev_gpt.so.6.0 00:04:59.625 SYMLINK libspdk_bdev_null.so 00:04:59.625 LIB libspdk_bdev_ftl.a 00:04:59.625 SO libspdk_bdev_zone_block.so.6.0 00:04:59.625 SYMLINK libspdk_bdev_passthru.so 00:04:59.625 SO libspdk_bdev_ftl.so.6.0 00:04:59.625 SYMLINK libspdk_bdev_gpt.so 00:04:59.625 LIB libspdk_bdev_aio.a 00:04:59.625 SYMLINK libspdk_bdev_split.so 00:04:59.625 SYMLINK libspdk_bdev_delay.so 00:04:59.625 LIB libspdk_bdev_malloc.a 00:04:59.625 LIB libspdk_bdev_iscsi.a 00:04:59.625 SO libspdk_bdev_aio.so.6.0 00:04:59.625 SYMLINK libspdk_bdev_zone_block.so 00:04:59.625 SO libspdk_bdev_malloc.so.6.0 00:04:59.625 SO libspdk_bdev_iscsi.so.6.0 00:04:59.625 SYMLINK libspdk_bdev_ftl.so 00:04:59.625 LIB libspdk_bdev_lvol.a 00:04:59.625 LIB libspdk_bdev_virtio.a 00:04:59.625 SO libspdk_bdev_lvol.so.6.0 00:04:59.625 SYMLINK libspdk_bdev_aio.so 00:04:59.625 SYMLINK libspdk_bdev_malloc.so 00:04:59.625 SYMLINK libspdk_bdev_iscsi.so 00:04:59.625 SO libspdk_bdev_virtio.so.6.0 00:04:59.886 SYMLINK libspdk_bdev_lvol.so 00:04:59.886 SYMLINK libspdk_bdev_virtio.so 00:05:00.147 LIB libspdk_bdev_raid.a 00:05:00.148 SO libspdk_bdev_raid.so.6.0 00:05:00.148 SYMLINK libspdk_bdev_raid.so 00:05:01.090 LIB libspdk_bdev_nvme.a 00:05:01.090 SO libspdk_bdev_nvme.so.7.0 00:05:01.351 SYMLINK libspdk_bdev_nvme.so 00:05:01.924 CC module/event/subsystems/iobuf/iobuf.o 00:05:01.924 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:01.924 CC module/event/subsystems/sock/sock.o 00:05:01.924 CC module/event/subsystems/vmd/vmd.o 00:05:01.924 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:01.924 CC module/event/subsystems/keyring/keyring.o 00:05:01.924 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:01.924 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:01.924 CC module/event/subsystems/scheduler/scheduler.o 00:05:01.924 CC module/event/subsystems/fsdev/fsdev.o 00:05:02.183 LIB libspdk_event_keyring.a 00:05:02.183 LIB libspdk_event_fsdev.a 00:05:02.183 LIB libspdk_event_iobuf.a 00:05:02.183 LIB libspdk_event_vmd.a 00:05:02.183 LIB libspdk_event_vhost_blk.a 00:05:02.183 LIB libspdk_event_sock.a 00:05:02.183 SO libspdk_event_keyring.so.1.0 00:05:02.183 LIB libspdk_event_scheduler.a 00:05:02.183 LIB libspdk_event_vfu_tgt.a 00:05:02.183 SO libspdk_event_iobuf.so.3.0 00:05:02.183 SO libspdk_event_fsdev.so.1.0 00:05:02.183 SO libspdk_event_vhost_blk.so.3.0 00:05:02.183 SO libspdk_event_vfu_tgt.so.3.0 00:05:02.183 SO libspdk_event_sock.so.5.0 00:05:02.183 SO libspdk_event_scheduler.so.4.0 00:05:02.183 SO libspdk_event_vmd.so.6.0 00:05:02.183 SYMLINK libspdk_event_keyring.so 00:05:02.183 
SYMLINK libspdk_event_fsdev.so 00:05:02.183 SYMLINK libspdk_event_iobuf.so 00:05:02.183 SYMLINK libspdk_event_vhost_blk.so 00:05:02.183 SYMLINK libspdk_event_vfu_tgt.so 00:05:02.183 SYMLINK libspdk_event_scheduler.so 00:05:02.183 SYMLINK libspdk_event_sock.so 00:05:02.183 SYMLINK libspdk_event_vmd.so 00:05:02.753 CC module/event/subsystems/accel/accel.o 00:05:02.753 LIB libspdk_event_accel.a 00:05:02.753 SO libspdk_event_accel.so.6.0 00:05:02.753 SYMLINK libspdk_event_accel.so 00:05:03.324 CC module/event/subsystems/bdev/bdev.o 00:05:03.324 LIB libspdk_event_bdev.a 00:05:03.324 SO libspdk_event_bdev.so.6.0 00:05:03.585 SYMLINK libspdk_event_bdev.so 00:05:03.846 CC module/event/subsystems/scsi/scsi.o 00:05:03.846 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:03.846 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:03.846 CC module/event/subsystems/nbd/nbd.o 00:05:03.846 CC module/event/subsystems/ublk/ublk.o 00:05:03.847 LIB libspdk_event_ublk.a 00:05:03.847 LIB libspdk_event_nbd.a 00:05:04.108 LIB libspdk_event_scsi.a 00:05:04.108 SO libspdk_event_ublk.so.3.0 00:05:04.108 SO libspdk_event_nbd.so.6.0 00:05:04.108 SO libspdk_event_scsi.so.6.0 00:05:04.108 LIB libspdk_event_nvmf.a 00:05:04.108 SYMLINK libspdk_event_ublk.so 00:05:04.108 SYMLINK libspdk_event_nbd.so 00:05:04.108 SYMLINK libspdk_event_scsi.so 00:05:04.108 SO libspdk_event_nvmf.so.6.0 00:05:04.108 SYMLINK libspdk_event_nvmf.so 00:05:04.369 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:04.369 CC module/event/subsystems/iscsi/iscsi.o 00:05:04.629 LIB libspdk_event_vhost_scsi.a 00:05:04.629 LIB libspdk_event_iscsi.a 00:05:04.629 SO libspdk_event_vhost_scsi.so.3.0 00:05:04.629 SO libspdk_event_iscsi.so.6.0 00:05:04.629 SYMLINK libspdk_event_vhost_scsi.so 00:05:04.629 SYMLINK libspdk_event_iscsi.so 00:05:04.890 SO libspdk.so.6.0 00:05:04.890 SYMLINK libspdk.so 00:05:05.461 CXX app/trace/trace.o 00:05:05.461 CC app/trace_record/trace_record.o 00:05:05.461 CC app/spdk_lspci/spdk_lspci.o 00:05:05.461 CC app/spdk_top/spdk_top.o 00:05:05.461 CC app/spdk_nvme_discover/discovery_aer.o 00:05:05.461 CC test/rpc_client/rpc_client_test.o 00:05:05.461 TEST_HEADER include/spdk/accel.h 00:05:05.461 TEST_HEADER include/spdk/accel_module.h 00:05:05.461 TEST_HEADER include/spdk/assert.h 00:05:05.461 CC app/spdk_nvme_identify/identify.o 00:05:05.461 TEST_HEADER include/spdk/barrier.h 00:05:05.461 TEST_HEADER include/spdk/base64.h 00:05:05.461 TEST_HEADER include/spdk/bdev.h 00:05:05.461 CC app/spdk_nvme_perf/perf.o 00:05:05.461 TEST_HEADER include/spdk/bdev_module.h 00:05:05.461 TEST_HEADER include/spdk/bit_array.h 00:05:05.461 TEST_HEADER include/spdk/bdev_zone.h 00:05:05.461 TEST_HEADER include/spdk/bit_pool.h 00:05:05.461 TEST_HEADER include/spdk/blob_bdev.h 00:05:05.461 TEST_HEADER include/spdk/blobfs.h 00:05:05.461 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:05.461 TEST_HEADER include/spdk/blob.h 00:05:05.461 TEST_HEADER include/spdk/conf.h 00:05:05.461 TEST_HEADER include/spdk/config.h 00:05:05.461 TEST_HEADER include/spdk/cpuset.h 00:05:05.461 TEST_HEADER include/spdk/crc16.h 00:05:05.461 TEST_HEADER include/spdk/crc32.h 00:05:05.461 TEST_HEADER include/spdk/dif.h 00:05:05.461 TEST_HEADER include/spdk/crc64.h 00:05:05.461 TEST_HEADER include/spdk/dma.h 00:05:05.461 TEST_HEADER include/spdk/endian.h 00:05:05.461 TEST_HEADER include/spdk/env_dpdk.h 00:05:05.461 TEST_HEADER include/spdk/env.h 00:05:05.461 TEST_HEADER include/spdk/event.h 00:05:05.461 TEST_HEADER include/spdk/fd_group.h 00:05:05.461 TEST_HEADER include/spdk/file.h 
00:05:05.461 TEST_HEADER include/spdk/fd.h 00:05:05.461 TEST_HEADER include/spdk/fsdev.h 00:05:05.461 TEST_HEADER include/spdk/fsdev_module.h 00:05:05.461 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:05.461 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:05.461 TEST_HEADER include/spdk/ftl.h 00:05:05.461 TEST_HEADER include/spdk/hexlify.h 00:05:05.461 CC app/nvmf_tgt/nvmf_main.o 00:05:05.461 TEST_HEADER include/spdk/gpt_spec.h 00:05:05.461 TEST_HEADER include/spdk/idxd.h 00:05:05.461 TEST_HEADER include/spdk/histogram_data.h 00:05:05.461 CC app/spdk_dd/spdk_dd.o 00:05:05.461 TEST_HEADER include/spdk/idxd_spec.h 00:05:05.461 TEST_HEADER include/spdk/init.h 00:05:05.461 TEST_HEADER include/spdk/ioat.h 00:05:05.461 TEST_HEADER include/spdk/ioat_spec.h 00:05:05.461 TEST_HEADER include/spdk/iscsi_spec.h 00:05:05.461 TEST_HEADER include/spdk/json.h 00:05:05.461 TEST_HEADER include/spdk/jsonrpc.h 00:05:05.461 TEST_HEADER include/spdk/keyring.h 00:05:05.461 TEST_HEADER include/spdk/keyring_module.h 00:05:05.461 TEST_HEADER include/spdk/likely.h 00:05:05.461 TEST_HEADER include/spdk/log.h 00:05:05.461 TEST_HEADER include/spdk/lvol.h 00:05:05.461 TEST_HEADER include/spdk/md5.h 00:05:05.461 TEST_HEADER include/spdk/mmio.h 00:05:05.461 TEST_HEADER include/spdk/memory.h 00:05:05.461 TEST_HEADER include/spdk/net.h 00:05:05.461 TEST_HEADER include/spdk/notify.h 00:05:05.461 TEST_HEADER include/spdk/nbd.h 00:05:05.461 TEST_HEADER include/spdk/nvme.h 00:05:05.461 TEST_HEADER include/spdk/nvme_intel.h 00:05:05.461 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:05.461 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:05.461 TEST_HEADER include/spdk/nvme_spec.h 00:05:05.461 CC app/iscsi_tgt/iscsi_tgt.o 00:05:05.461 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:05.461 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:05.461 TEST_HEADER include/spdk/nvme_zns.h 00:05:05.461 TEST_HEADER include/spdk/nvmf.h 00:05:05.461 TEST_HEADER include/spdk/nvmf_transport.h 00:05:05.461 TEST_HEADER include/spdk/nvmf_spec.h 00:05:05.461 TEST_HEADER include/spdk/pci_ids.h 00:05:05.461 TEST_HEADER include/spdk/opal.h 00:05:05.461 TEST_HEADER include/spdk/pipe.h 00:05:05.461 TEST_HEADER include/spdk/opal_spec.h 00:05:05.461 TEST_HEADER include/spdk/queue.h 00:05:05.461 TEST_HEADER include/spdk/rpc.h 00:05:05.461 TEST_HEADER include/spdk/scheduler.h 00:05:05.461 TEST_HEADER include/spdk/reduce.h 00:05:05.461 TEST_HEADER include/spdk/scsi.h 00:05:05.461 TEST_HEADER include/spdk/scsi_spec.h 00:05:05.461 TEST_HEADER include/spdk/stdinc.h 00:05:05.461 TEST_HEADER include/spdk/string.h 00:05:05.461 TEST_HEADER include/spdk/sock.h 00:05:05.461 TEST_HEADER include/spdk/thread.h 00:05:05.461 TEST_HEADER include/spdk/trace.h 00:05:05.461 TEST_HEADER include/spdk/tree.h 00:05:05.461 CC app/spdk_tgt/spdk_tgt.o 00:05:05.461 TEST_HEADER include/spdk/trace_parser.h 00:05:05.461 TEST_HEADER include/spdk/ublk.h 00:05:05.461 TEST_HEADER include/spdk/util.h 00:05:05.461 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:05.461 TEST_HEADER include/spdk/uuid.h 00:05:05.461 TEST_HEADER include/spdk/version.h 00:05:05.461 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:05.461 TEST_HEADER include/spdk/vhost.h 00:05:05.461 TEST_HEADER include/spdk/vmd.h 00:05:05.461 TEST_HEADER include/spdk/xor.h 00:05:05.461 TEST_HEADER include/spdk/zipf.h 00:05:05.461 CXX test/cpp_headers/accel.o 00:05:05.461 CXX test/cpp_headers/accel_module.o 00:05:05.461 CXX test/cpp_headers/assert.o 00:05:05.461 CXX test/cpp_headers/barrier.o 00:05:05.461 CXX test/cpp_headers/bdev.o 
00:05:05.461 CXX test/cpp_headers/base64.o 00:05:05.461 CXX test/cpp_headers/bit_array.o 00:05:05.461 CXX test/cpp_headers/bdev_module.o 00:05:05.461 CXX test/cpp_headers/bdev_zone.o 00:05:05.461 CXX test/cpp_headers/bit_pool.o 00:05:05.461 CXX test/cpp_headers/blobfs_bdev.o 00:05:05.461 CXX test/cpp_headers/blob_bdev.o 00:05:05.461 CXX test/cpp_headers/blobfs.o 00:05:05.461 CXX test/cpp_headers/blob.o 00:05:05.461 CXX test/cpp_headers/config.o 00:05:05.461 CXX test/cpp_headers/crc16.o 00:05:05.461 CXX test/cpp_headers/conf.o 00:05:05.461 CXX test/cpp_headers/cpuset.o 00:05:05.461 CXX test/cpp_headers/dif.o 00:05:05.461 CXX test/cpp_headers/crc32.o 00:05:05.461 CXX test/cpp_headers/crc64.o 00:05:05.461 CXX test/cpp_headers/env_dpdk.o 00:05:05.461 CXX test/cpp_headers/env.o 00:05:05.461 CXX test/cpp_headers/endian.o 00:05:05.461 CXX test/cpp_headers/dma.o 00:05:05.461 CXX test/cpp_headers/file.o 00:05:05.461 CXX test/cpp_headers/fsdev_module.o 00:05:05.461 CXX test/cpp_headers/ftl.o 00:05:05.461 CXX test/cpp_headers/fuse_dispatcher.o 00:05:05.461 CC test/thread/poller_perf/poller_perf.o 00:05:05.461 CXX test/cpp_headers/event.o 00:05:05.461 CC test/app/histogram_perf/histogram_perf.o 00:05:05.461 CXX test/cpp_headers/fd.o 00:05:05.461 CXX test/cpp_headers/fd_group.o 00:05:05.461 CXX test/cpp_headers/hexlify.o 00:05:05.461 CXX test/cpp_headers/fsdev.o 00:05:05.461 CC examples/util/zipf/zipf.o 00:05:05.461 CC test/env/pci/pci_ut.o 00:05:05.461 LINK spdk_lspci 00:05:05.720 CXX test/cpp_headers/gpt_spec.o 00:05:05.720 LINK rpc_client_test 00:05:05.720 CXX test/cpp_headers/histogram_data.o 00:05:05.720 CXX test/cpp_headers/idxd.o 00:05:05.720 CXX test/cpp_headers/idxd_spec.o 00:05:05.720 CC examples/ioat/perf/perf.o 00:05:05.720 CXX test/cpp_headers/init.o 00:05:05.720 CXX test/cpp_headers/ioat.o 00:05:05.720 CXX test/cpp_headers/json.o 00:05:05.720 CXX test/cpp_headers/ioat_spec.o 00:05:05.720 CXX test/cpp_headers/iscsi_spec.o 00:05:05.720 CXX test/cpp_headers/jsonrpc.o 00:05:05.720 CXX test/cpp_headers/keyring.o 00:05:05.720 CXX test/cpp_headers/keyring_module.o 00:05:05.720 CXX test/cpp_headers/likely.o 00:05:05.720 CXX test/cpp_headers/md5.o 00:05:05.720 CXX test/cpp_headers/log.o 00:05:05.720 CC test/env/vtophys/vtophys.o 00:05:05.720 CC test/env/memory/memory_ut.o 00:05:05.720 CXX test/cpp_headers/lvol.o 00:05:05.720 CC test/app/bdev_svc/bdev_svc.o 00:05:05.720 CXX test/cpp_headers/net.o 00:05:05.720 CC test/dma/test_dma/test_dma.o 00:05:05.720 CXX test/cpp_headers/memory.o 00:05:05.720 CXX test/cpp_headers/nvme.o 00:05:05.720 CXX test/cpp_headers/nbd.o 00:05:05.720 CXX test/cpp_headers/nvme_intel.o 00:05:05.720 CXX test/cpp_headers/mmio.o 00:05:05.720 CXX test/cpp_headers/nvme_ocssd.o 00:05:05.720 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:05.720 CXX test/cpp_headers/notify.o 00:05:05.720 CXX test/cpp_headers/nvmf_cmd.o 00:05:05.720 CXX test/cpp_headers/nvme_spec.o 00:05:05.720 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:05.720 CXX test/cpp_headers/nvme_zns.o 00:05:05.720 CXX test/cpp_headers/nvmf_spec.o 00:05:05.720 LINK interrupt_tgt 00:05:05.720 CXX test/cpp_headers/pci_ids.o 00:05:05.720 CXX test/cpp_headers/nvmf.o 00:05:05.720 CXX test/cpp_headers/nvmf_transport.o 00:05:05.720 CC examples/ioat/verify/verify.o 00:05:05.720 CXX test/cpp_headers/opal.o 00:05:05.720 CXX test/cpp_headers/queue.o 00:05:05.720 CXX test/cpp_headers/opal_spec.o 00:05:05.720 CXX test/cpp_headers/pipe.o 00:05:05.720 CC test/app/stub/stub.o 00:05:05.720 CXX test/cpp_headers/reduce.o 00:05:05.720 CXX 
test/cpp_headers/scsi_spec.o 00:05:05.720 CXX test/cpp_headers/rpc.o 00:05:05.720 CXX test/cpp_headers/scsi.o 00:05:05.720 CXX test/cpp_headers/sock.o 00:05:05.720 LINK spdk_nvme_discover 00:05:05.720 CXX test/cpp_headers/thread.o 00:05:05.720 CXX test/cpp_headers/scheduler.o 00:05:05.720 CXX test/cpp_headers/trace.o 00:05:05.720 CC test/app/jsoncat/jsoncat.o 00:05:05.720 CXX test/cpp_headers/ublk.o 00:05:05.720 CXX test/cpp_headers/util.o 00:05:05.720 CXX test/cpp_headers/uuid.o 00:05:05.720 CXX test/cpp_headers/version.o 00:05:05.720 CC app/fio/nvme/fio_plugin.o 00:05:05.720 CXX test/cpp_headers/vhost.o 00:05:05.720 CXX test/cpp_headers/xor.o 00:05:05.720 CXX test/cpp_headers/stdinc.o 00:05:05.720 CXX test/cpp_headers/zipf.o 00:05:05.720 CXX test/cpp_headers/vmd.o 00:05:05.720 CXX test/cpp_headers/string.o 00:05:05.720 CXX test/cpp_headers/tree.o 00:05:05.720 CXX test/cpp_headers/trace_parser.o 00:05:05.720 LINK nvmf_tgt 00:05:05.720 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:05.720 CXX test/cpp_headers/vfio_user_pci.o 00:05:05.720 CXX test/cpp_headers/vfio_user_spec.o 00:05:05.979 LINK spdk_trace 00:05:05.979 LINK poller_perf 00:05:05.979 LINK iscsi_tgt 00:05:05.979 LINK zipf 00:05:05.979 CC app/fio/bdev/fio_plugin.o 00:05:05.979 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:05.979 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:05.979 LINK histogram_perf 00:05:05.979 LINK spdk_dd 00:05:05.979 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:06.238 LINK verify 00:05:06.238 LINK ioat_perf 00:05:06.238 CC test/env/mem_callbacks/mem_callbacks.o 00:05:06.238 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:06.238 LINK spdk_trace_record 00:05:06.238 LINK pci_ut 00:05:06.238 LINK env_dpdk_post_init 00:05:06.238 LINK spdk_tgt 00:05:06.496 CC app/vhost/vhost.o 00:05:06.496 LINK bdev_svc 00:05:06.496 CC examples/vmd/led/led.o 00:05:06.496 CC examples/sock/hello_world/hello_sock.o 00:05:06.496 CC examples/idxd/perf/perf.o 00:05:06.496 CC examples/vmd/lsvmd/lsvmd.o 00:05:06.496 LINK test_dma 00:05:06.496 LINK spdk_nvme_perf 00:05:06.496 LINK vtophys 00:05:06.496 CC test/event/reactor_perf/reactor_perf.o 00:05:06.496 CC test/event/reactor/reactor.o 00:05:06.496 CC examples/thread/thread/thread_ex.o 00:05:06.496 CC test/event/event_perf/event_perf.o 00:05:06.496 LINK spdk_bdev 00:05:06.496 LINK nvme_fuzz 00:05:06.496 LINK jsoncat 00:05:06.496 CC test/event/app_repeat/app_repeat.o 00:05:06.496 LINK stub 00:05:06.496 CC test/event/scheduler/scheduler.o 00:05:06.755 LINK vhost 00:05:06.755 LINK led 00:05:06.755 LINK lsvmd 00:05:06.755 LINK reactor_perf 00:05:06.755 LINK reactor 00:05:06.755 LINK mem_callbacks 00:05:06.755 LINK event_perf 00:05:06.755 LINK hello_sock 00:05:06.755 LINK app_repeat 00:05:06.755 LINK vhost_fuzz 00:05:06.755 LINK idxd_perf 00:05:06.755 LINK spdk_nvme 00:05:06.755 LINK thread 00:05:06.755 LINK scheduler 00:05:07.085 LINK spdk_top 00:05:07.085 LINK spdk_nvme_identify 00:05:07.085 CC test/nvme/aer/aer.o 00:05:07.085 CC test/nvme/e2edp/nvme_dp.o 00:05:07.085 CC test/nvme/boot_partition/boot_partition.o 00:05:07.085 CC test/nvme/simple_copy/simple_copy.o 00:05:07.085 CC test/nvme/connect_stress/connect_stress.o 00:05:07.085 CC test/nvme/cuse/cuse.o 00:05:07.085 CC test/nvme/reset/reset.o 00:05:07.085 CC test/nvme/compliance/nvme_compliance.o 00:05:07.085 CC test/nvme/fdp/fdp.o 00:05:07.085 CC test/nvme/sgl/sgl.o 00:05:07.085 CC test/nvme/startup/startup.o 00:05:07.085 CC test/nvme/fused_ordering/fused_ordering.o 00:05:07.085 CC test/nvme/reserve/reserve.o 
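[editor's note] The long TEST_HEADER include/spdk/*.h run followed by one CXX test/cpp_headers/*.o object per header is a self-containment check: every public header is compiled on its own, as C++, so a header that fails to include its own dependencies breaks immediately. A minimal bash sketch of the same idea; the include path, compiler flags, and temp-file naming are assumptions, not SPDK's real test harness:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for hdr in "$SPDK_ROOT"/include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        # one translation unit per header, containing nothing but that header
        printf '#include <spdk/%s.h>\n' "$name" > "/tmp/hdr_$name.cpp"
        g++ -I"$SPDK_ROOT/include" -c "/tmp/hdr_$name.cpp" -o /dev/null ||
            echo "not self-contained: spdk/$name.h"
    done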
00:05:07.085 CC test/nvme/overhead/overhead.o
00:05:07.085 CC test/nvme/err_injection/err_injection.o
00:05:07.085 CC test/nvme/doorbell_aers/doorbell_aers.o
00:05:07.085 CC test/blobfs/mkfs/mkfs.o
00:05:07.085 CC test/accel/dif/dif.o
00:05:07.085 CC test/lvol/esnap/esnap.o
00:05:07.380 LINK boot_partition
00:05:07.380 LINK startup
00:05:07.380 LINK err_injection
00:05:07.380 CC examples/nvme/reconnect/reconnect.o
00:05:07.380 LINK doorbell_aers
00:05:07.380 LINK connect_stress
00:05:07.380 CC examples/nvme/cmb_copy/cmb_copy.o
00:05:07.380 CC examples/nvme/abort/abort.o
00:05:07.380 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:07.380 CC examples/nvme/hello_world/hello_world.o
00:05:07.380 CC examples/nvme/hotplug/hotplug.o
00:05:07.380 CC examples/nvme/arbitration/arbitration.o
00:05:07.380 CC examples/nvme/nvme_manage/nvme_manage.o
00:05:07.380 LINK simple_copy
00:05:07.380 LINK fused_ordering
00:05:07.380 LINK reserve
00:05:07.380 LINK reset
00:05:07.380 LINK nvme_dp
00:05:07.380 LINK aer
00:05:07.380 LINK sgl
00:05:07.380 LINK mkfs
00:05:07.380 LINK memory_ut
00:05:07.380 LINK nvme_compliance
00:05:07.380 LINK overhead
00:05:07.380 LINK fdp
00:05:07.380 CC examples/accel/perf/accel_perf.o
00:05:07.380 LINK cmb_copy
00:05:07.380 LINK pmr_persistence
00:05:07.380 CC examples/blob/cli/blobcli.o
00:05:07.380 CC examples/blob/hello_world/hello_blob.o
00:05:07.380 CC examples/fsdev/hello_world/hello_fsdev.o
00:05:07.380 LINK hello_world
00:05:07.380 LINK hotplug
00:05:07.642 LINK arbitration
00:05:07.642 LINK reconnect
00:05:07.642 LINK iscsi_fuzz
00:05:07.642 LINK abort
00:05:07.642 LINK dif
00:05:07.642 LINK nvme_manage
00:05:07.642 LINK hello_blob
00:05:07.642 LINK hello_fsdev
00:05:07.903 LINK accel_perf
00:05:07.903 LINK blobcli
00:05:08.163 LINK cuse
00:05:08.163 CC test/bdev/bdevio/bdevio.o
00:05:08.424 CC examples/bdev/hello_world/hello_bdev.o
00:05:08.424 CC examples/bdev/bdevperf/bdevperf.o
00:05:08.685 LINK bdevio
00:05:08.685 LINK hello_bdev
00:05:09.255 LINK bdevperf
00:05:09.825 CC examples/nvmf/nvmf/nvmf.o
00:05:10.086 LINK nvmf
00:05:10.348 LINK esnap
00:05:10.609
00:05:10.609 real 0m53.306s
00:05:10.609 user 7m41.138s
00:05:10.609 sys 4m24.795s
00:05:10.609 08:20:02 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:05:10.609 08:20:02 make -- common/autotest_common.sh@10 -- $ set +x
00:05:10.609 ************************************
00:05:10.609 END TEST make
00:05:10.609 ************************************
00:05:10.609 08:20:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:10.609 08:20:02 -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:10.609 08:20:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:10.609 08:20:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.609 08:20:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:05:10.609 08:20:02 -- pm/common@44 -- $ pid=3431104
00:05:10.609 08:20:02 -- pm/common@50 -- $ kill -TERM 3431104
00:05:10.609 08:20:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.609 08:20:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:05:10.609 08:20:02 -- pm/common@44 -- $ pid=3431105
00:05:10.609 08:20:02 -- pm/common@50 -- $ kill -TERM 3431105
00:05:10.609 08:20:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.609 08:20:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:05:10.609 08:20:02 -- pm/common@44 -- $ pid=3431106
00:05:10.609 08:20:02 -- pm/common@50 -- $ kill -TERM 3431106
00:05:10.609 08:20:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.609 08:20:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:05:10.609 08:20:02 -- pm/common@44 -- $ pid=3431130
00:05:10.609 08:20:02 -- pm/common@50 -- $ sudo -E kill -TERM 3431130
00:05:10.871 08:20:02 -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:10.871 08:20:02 -- common/autotest_common.sh@1681 -- # lcov --version
00:05:10.871 08:20:02 -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:10.871 08:20:02 -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:10.871 08:20:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:10.871 08:20:02 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:10.871 08:20:02 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:10.871 08:20:02 -- scripts/common.sh@336 -- # IFS=.-:
00:05:10.871 08:20:02 -- scripts/common.sh@336 -- # read -ra ver1
00:05:10.871 08:20:02 -- scripts/common.sh@337 -- # IFS=.-:
00:05:10.871 08:20:02 -- scripts/common.sh@337 -- # read -ra ver2
00:05:10.871 08:20:02 -- scripts/common.sh@338 -- # local 'op=<'
00:05:10.871 08:20:02 -- scripts/common.sh@340 -- # ver1_l=2
00:05:10.871 08:20:02 -- scripts/common.sh@341 -- # ver2_l=1
00:05:10.871 08:20:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:10.871 08:20:02 -- scripts/common.sh@344 -- # case "$op" in
00:05:10.871 08:20:02 -- scripts/common.sh@345 -- # : 1
00:05:10.871 08:20:02 -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:10.871 08:20:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:10.871 08:20:02 -- scripts/common.sh@365 -- # decimal 1
00:05:10.871 08:20:02 -- scripts/common.sh@353 -- # local d=1
00:05:10.871 08:20:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:10.871 08:20:02 -- scripts/common.sh@355 -- # echo 1
00:05:10.871 08:20:02 -- scripts/common.sh@365 -- # ver1[v]=1
00:05:10.871 08:20:02 -- scripts/common.sh@366 -- # decimal 2
00:05:10.871 08:20:02 -- scripts/common.sh@353 -- # local d=2
00:05:10.871 08:20:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:10.871 08:20:02 -- scripts/common.sh@355 -- # echo 2
00:05:10.871 08:20:02 -- scripts/common.sh@366 -- # ver2[v]=2
00:05:10.871 08:20:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:10.871 08:20:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:10.871 08:20:02 -- scripts/common.sh@368 -- # return 0
00:05:10.871 08:20:02 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:10.871 08:20:02 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:10.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.871 --rc genhtml_branch_coverage=1
00:05:10.871 --rc genhtml_function_coverage=1
00:05:10.871 --rc genhtml_legend=1
00:05:10.871 --rc geninfo_all_blocks=1
00:05:10.871 --rc geninfo_unexecuted_blocks=1
00:05:10.871
00:05:10.871 '
00:05:10.871 08:20:02 -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:10.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.871 --rc genhtml_branch_coverage=1
00:05:10.871 --rc genhtml_function_coverage=1
00:05:10.871 --rc genhtml_legend=1
00:05:10.871 --rc geninfo_all_blocks=1
00:05:10.871 --rc geninfo_unexecuted_blocks=1
00:05:10.871
00:05:10.871 '
00:05:10.871 08:20:02 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:10.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.871 --rc genhtml_branch_coverage=1
00:05:10.871 --rc genhtml_function_coverage=1
00:05:10.871 --rc genhtml_legend=1
00:05:10.871 --rc geninfo_all_blocks=1
00:05:10.871 --rc geninfo_unexecuted_blocks=1
00:05:10.871
00:05:10.871 '
00:05:10.871 08:20:02 -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:10.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.871 --rc genhtml_branch_coverage=1
00:05:10.871 --rc genhtml_function_coverage=1
00:05:10.871 --rc genhtml_legend=1
00:05:10.871 --rc geninfo_all_blocks=1
00:05:10.871 --rc geninfo_unexecuted_blocks=1
00:05:10.871
00:05:10.871 '
00:05:10.871 08:20:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:10.871 08:20:02 -- nvmf/common.sh@7 -- # uname -s
00:05:10.871 08:20:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:10.871 08:20:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:10.872 08:20:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:10.872 08:20:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:10.872 08:20:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:10.872 08:20:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:10.872 08:20:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:10.872 08:20:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:10.872 08:20:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:10.872 08:20:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:10.872 08:20:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:10.872 08:20:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:10.872 08:20:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:10.872 08:20:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:10.872 08:20:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:10.872 08:20:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:10.872 08:20:02 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:10.872 08:20:02 -- scripts/common.sh@15 -- # shopt -s extglob
00:05:10.872 08:20:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:10.872 08:20:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:10.872 08:20:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:10.872 08:20:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.872 08:20:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.872 08:20:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.872 08:20:02 -- paths/export.sh@5 -- # export PATH
00:05:10.872 08:20:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.872 08:20:02 -- nvmf/common.sh@51 -- # : 0
00:05:10.872 08:20:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:10.872 08:20:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:10.872 08:20:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:10.872 08:20:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:10.872 08:20:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:10.872 08:20:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:10.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:10.872 08:20:02 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:10.872 08:20:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:10.872 08:20:02 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:10.872 08:20:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:10.872 08:20:02 -- spdk/autotest.sh@32 -- # uname -s
00:05:10.872 08:20:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:10.872 08:20:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:05:10.872 08:20:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:05:10.872 08:20:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:05:10.872 08:20:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:05:10.872 08:20:02 -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:10.872 08:20:02 -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:10.872 08:20:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:05:10.872 08:20:02 -- spdk/autotest.sh@48 -- # udevadm_pid=3496267
00:05:10.872 08:20:02 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:10.872 08:20:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:05:10.872 08:20:02 -- pm/common@17 -- # local monitor
00:05:10.872 08:20:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.872 08:20:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.872 08:20:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.872 08:20:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.872 08:20:02 -- pm/common@21 -- # date +%s
00:05:10.872 08:20:02 -- pm/common@25 -- # sleep 1
00:05:10.872 08:20:02 -- pm/common@21 -- # date +%s
00:05:10.872 08:20:02 -- pm/common@21 -- # date +%s
00:05:10.872 08:20:02 -- pm/common@21 -- # date +%s
00:05:10.872 08:20:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727763602
00:05:10.872 08:20:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727763602
00:05:10.872 08:20:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727763602
00:05:10.872 08:20:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727763602
00:05:10.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727763602_collect-cpu-load.pm.log
00:05:10.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727763602_collect-vmstat.pm.log
00:05:10.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727763602_collect-cpu-temp.pm.log
00:05:10.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727763602_collect-bmc-pm.bmc.pm.log
00:05:11.815 08:20:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:11.815 08:20:03 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:11.815 08:20:03 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:11.815 08:20:03 -- common/autotest_common.sh@10 -- # set +x
00:05:11.815 08:20:03 -- spdk/autotest.sh@59 -- # create_test_list
00:05:11.815 08:20:03 -- common/autotest_common.sh@748 -- # xtrace_disable
00:05:11.815 08:20:03 -- common/autotest_common.sh@10 -- # set +x
00:05:12.077 08:20:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:05:12.077 08:20:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:12.077 08:20:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:12.077 08:20:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:05:12.077 08:20:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:12.077 08:20:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:12.077 08:20:03 -- common/autotest_common.sh@1455 -- # uname
00:05:12.077 08:20:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:05:12.077 08:20:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:12.077 08:20:03 -- common/autotest_common.sh@1475 -- # uname
00:05:12.077 08:20:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:05:12.077 08:20:03 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:12.077 08:20:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:12.077 lcov: LCOV version 1.15
00:05:12.077 08:20:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:05:34.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:34.051 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:05:42.194 08:20:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:42.194 08:20:33 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:42.194 08:20:33 -- common/autotest_common.sh@10 -- # set +x
00:05:42.194 08:20:33 -- spdk/autotest.sh@78 -- # rm -f
00:05:42.194 08:20:33 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:45.491 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:65:00.0 (144d a80a): Already using the nvme driver
00:05:45.491 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:05:45.491 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:05:45.752 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:05:45.752 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:05:45.752 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:05:45.752 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:05:46.015 08:20:37 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:46.015 08:20:37 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:05:46.015 08:20:37 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:05:46.015 08:20:37 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:05:46.015 08:20:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:46.015 08:20:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:05:46.015 08:20:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:05:46.015 08:20:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:46.015 08:20:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:46.015 08:20:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:05:46.015 08:20:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:46.015 08:20:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:46.015 08:20:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:46.015 08:20:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:46.015 08:20:37 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:46.015 No valid GPT data, bailing
00:05:46.015 08:20:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:46.015 08:20:37 -- scripts/common.sh@394 -- # pt=
00:05:46.015 08:20:37 -- scripts/common.sh@395 -- # return 1
00:05:46.015 08:20:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:46.015 1+0 records in
00:05:46.015 1+0 records out
00:05:46.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478241 s, 219 MB/s
00:05:46.015 08:20:37 -- spdk/autotest.sh@105 -- # sync
00:05:46.015 08:20:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:46.015 08:20:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:46.015 08:20:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:54.161 08:20:45 -- spdk/autotest.sh@111 -- # uname -s
00:05:54.161 08:20:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:54.161 08:20:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:54.161 08:20:45 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:57.464 Hugepages
00:05:57.464 node hugesize free / total
00:05:57.464 node0 1048576kB 0 / 0
00:05:57.464 node0 2048kB 0 / 0
00:05:57.464 node1 1048576kB 0 / 0
00:05:57.464 node1 2048kB 0 / 0
00:05:57.464
00:05:57.464 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:57.464 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:05:57.464 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:05:57.464 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:05:57.464 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:05:57.464 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:05:57.464 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:05:57.464 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:05:57.464 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:05:57.464 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:05:57.464 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:05:57.464 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:05:57.464 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:05:57.464 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:05:57.464 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:05:57.464 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:05:57.464 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:05:57.465 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:05:57.465 08:20:48 -- spdk/autotest.sh@117 -- # uname -s
00:05:57.465 08:20:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:57.465 08:20:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:57.746 08:20:48 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:00.768 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:06:00.768 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:06:02.684 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:06:02.945 08:20:54 -- common/autotest_common.sh@1515 -- # sleep 1
00:06:03.888 08:20:55 -- common/autotest_common.sh@1516 -- # bdfs=()
00:06:03.888 08:20:55 -- common/autotest_common.sh@1516 -- # local bdfs
00:06:03.888 08:20:55 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:06:03.888 08:20:55 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:06:03.888 08:20:55 -- common/autotest_common.sh@1496 -- # bdfs=()
00:06:03.888 08:20:55 -- common/autotest_common.sh@1496 -- # local bdfs
00:06:03.888 08:20:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:03.888 08:20:55 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:03.888 08:20:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:06:03.888 08:20:55 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:06:03.888 08:20:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:06:03.888 08:20:55 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:06:07.193 Waiting for block devices as requested
00:06:07.193 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:06:07.193 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:06:07.453 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:06:07.453 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:06:07.453 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:06:07.715 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:06:07.715 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:06:07.715 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:06:07.977 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:06:07.977 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:06:07.977 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:06:08.238 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:06:08.238 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:06:08.238 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:06:08.499 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:06:08.499 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:06:08.499 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:06:08.760 08:21:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:06:08.760 08:21:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0
00:06:08.760 08:21:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0
00:06:08.760 08:21:00 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme
00:06:08.760 08:21:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:06:08.760 08:21:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]]
00:06:08.760 08:21:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:06:08.760 08:21:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:06:08.760 08:21:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:06:08.760 08:21:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:06:08.760 08:21:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:06:08.760 08:21:00 -- common/autotest_common.sh@1529 -- # grep oacs
00:06:08.760 08:21:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:06:08.760 08:21:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f'
00:06:08.760 08:21:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:06:08.760 08:21:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:06:08.760 08:21:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:06:08.760 08:21:00 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:06:08.760 08:21:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:06:08.760 08:21:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:06:08.760 08:21:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:06:08.760 08:21:00 -- common/autotest_common.sh@1541 -- # continue
00:06:08.760 08:21:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:08.760 08:21:00 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:08.760 08:21:00 -- common/autotest_common.sh@10 -- # set +x
00:06:09.022 08:21:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:09.022 08:21:00 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:09.022 08:21:00 -- common/autotest_common.sh@10 -- # set +x
00:06:09.022 08:21:00 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:12.491 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:06:12.491 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:06:12.492 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:06:12.492 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:06:12.492 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:06:12.752 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:06:12.752 08:21:04 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:12.752 08:21:04 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:12.752 08:21:04 -- common/autotest_common.sh@10 -- # set +x
00:06:13.013 08:21:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:13.013 08:21:04 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:06:13.013 08:21:04 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:06:13.013 08:21:04 -- common/autotest_common.sh@1561 -- # bdfs=()
00:06:13.013 08:21:04 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:06:13.013 08:21:04 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:06:13.013 08:21:04 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:06:13.013 08:21:04 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:06:13.013 08:21:04 -- common/autotest_common.sh@1496 -- # bdfs=()
00:06:13.013 08:21:04 -- common/autotest_common.sh@1496 -- # local bdfs
00:06:13.013 08:21:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:13.013 08:21:04 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:13.013 08:21:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:06:13.013 08:21:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:06:13.013 08:21:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:06:13.013 08:21:04 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:06:13.013 08:21:04 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device
00:06:13.013 08:21:04 -- common/autotest_common.sh@1564 -- # device=0xa80a
00:06:13.013 08:21:04 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]]
00:06:13.013 08:21:04 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:06:13.013 08:21:04 -- common/autotest_common.sh@1570 -- # return 0
00:06:13.013 08:21:04 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:06:13.013 08:21:04 -- common/autotest_common.sh@1578 -- # return 0
00:06:13.013 08:21:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:13.013 08:21:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:13.013 08:21:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:13.013 08:21:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:13.013 08:21:04 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:13.013 08:21:04 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:13.013 08:21:04 -- common/autotest_common.sh@10 -- # set +x
00:06:13.013 08:21:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:13.013 08:21:04 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:13.013 08:21:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:13.013 08:21:04 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:13.013 08:21:04 -- common/autotest_common.sh@10 -- # set +x
00:06:13.013 ************************************
00:06:13.013 START TEST env
00:06:13.013 ************************************
00:06:13.013 08:21:04 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:13.275 * Looking for test storage...
00:06:13.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1681 -- # lcov --version
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:13.275 08:21:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:13.275 08:21:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:13.275 08:21:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:13.275 08:21:04 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:13.275 08:21:04 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:13.275 08:21:04 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:13.275 08:21:04 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:13.275 08:21:04 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:13.275 08:21:04 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:13.275 08:21:04 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:13.275 08:21:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:13.275 08:21:04 env -- scripts/common.sh@344 -- # case "$op" in
00:06:13.275 08:21:04 env -- scripts/common.sh@345 -- # : 1
00:06:13.275 08:21:04 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:13.275 08:21:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:13.275 08:21:04 env -- scripts/common.sh@365 -- # decimal 1
00:06:13.275 08:21:04 env -- scripts/common.sh@353 -- # local d=1
00:06:13.275 08:21:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:13.275 08:21:04 env -- scripts/common.sh@355 -- # echo 1
00:06:13.275 08:21:04 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:13.275 08:21:04 env -- scripts/common.sh@366 -- # decimal 2
00:06:13.275 08:21:04 env -- scripts/common.sh@353 -- # local d=2
00:06:13.275 08:21:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:13.275 08:21:04 env -- scripts/common.sh@355 -- # echo 2
00:06:13.275 08:21:04 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:13.275 08:21:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:13.275 08:21:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:13.275 08:21:04 env -- scripts/common.sh@368 -- # return 0
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:13.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:13.275 --rc genhtml_branch_coverage=1
00:06:13.275 --rc genhtml_function_coverage=1
00:06:13.275 --rc genhtml_legend=1
00:06:13.275 --rc geninfo_all_blocks=1
00:06:13.275 --rc geninfo_unexecuted_blocks=1
00:06:13.275
00:06:13.275 '
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:13.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:13.275 --rc genhtml_branch_coverage=1
00:06:13.275 --rc genhtml_function_coverage=1
00:06:13.275 --rc genhtml_legend=1
00:06:13.275 --rc geninfo_all_blocks=1
00:06:13.275 --rc geninfo_unexecuted_blocks=1
00:06:13.275
00:06:13.275 '
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:13.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:13.275 --rc genhtml_branch_coverage=1
00:06:13.275 --rc genhtml_function_coverage=1
00:06:13.275 --rc genhtml_legend=1
00:06:13.275 --rc geninfo_all_blocks=1
00:06:13.275 --rc geninfo_unexecuted_blocks=1
00:06:13.275
00:06:13.275 '
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:13.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:13.275 --rc genhtml_branch_coverage=1
00:06:13.275 --rc genhtml_function_coverage=1
00:06:13.275 --rc genhtml_legend=1
00:06:13.275 --rc geninfo_all_blocks=1
00:06:13.275 --rc geninfo_unexecuted_blocks=1
00:06:13.275
00:06:13.275 '
00:06:13.275 08:21:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:13.275 08:21:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:13.275 08:21:04 env -- common/autotest_common.sh@10 -- # set +x
00:06:13.275 ************************************
00:06:13.275 START TEST env_memory
00:06:13.275 ************************************
00:06:13.275 08:21:04 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:13.275
00:06:13.275
00:06:13.275 CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.275 http://cunit.sourceforge.net/
00:06:13.275
00:06:13.275
00:06:13.275 Suite: memory
00:06:13.275 Test: alloc and free memory map ...[2024-10-01 08:21:05.029857] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:13.275 passed
00:06:13.275 Test: mem map translation ...[2024-10-01 08:21:05.055256] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:13.275 [2024-10-01 08:21:05.055276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:13.275 [2024-10-01 08:21:05.055323] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:13.275 [2024-10-01 08:21:05.055329] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:13.537 passed
00:06:13.537 Test: mem map registration ...[2024-10-01 08:21:05.110374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:13.537 [2024-10-01 08:21:05.110389] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:13.537 passed
00:06:13.537 Test: mem map adjacent registrations ...passed
00:06:13.537
00:06:13.537 Run Summary: Type Total Ran Passed Failed Inactive
00:06:13.537 suites 1 1 n/a 0 0
00:06:13.537 tests 4 4 4 0 0
00:06:13.537 asserts 152 152 152 0 n/a
00:06:13.537
00:06:13.537 Elapsed time = 0.194 seconds
00:06:13.537
00:06:13.537 real 0m0.208s
00:06:13.537 user 0m0.198s
00:06:13.537 sys 0m0.009s
00:06:13.537 08:21:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:13.537 08:21:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:13.537 ************************************
00:06:13.537 END TEST env_memory
00:06:13.537 ************************************
00:06:13.537 08:21:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:13.537 08:21:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:13.537 08:21:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:13.537 08:21:05 env -- common/autotest_common.sh@10 -- # set +x
00:06:13.537 ************************************
00:06:13.537 START TEST env_vtophys
00:06:13.537 ************************************
00:06:13.537 08:21:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:13.537 EAL: lib.eal log level changed from notice to debug
00:06:13.537 EAL: Detected lcore 0 as core 0 on socket 0
00:06:13.537 EAL: Detected lcore 1 as core 1 on socket 0
00:06:13.537 EAL: Detected lcore 2 as core 2 on socket 0
00:06:13.537 EAL: Detected lcore 3 as core 3 on socket 0
00:06:13.537 EAL: Detected lcore 4 as core 4 on socket 0
00:06:13.537 EAL: Detected lcore 5 as core 5 on socket 0
00:06:13.537 EAL: Detected lcore 6 as core 6 on socket 0
00:06:13.537 EAL: Detected lcore 7 as core 7 on socket 0
00:06:13.537 EAL: Detected lcore 8 as core 8 on socket 0
00:06:13.537 EAL: Detected lcore 9 as core 9 on socket 0
00:06:13.537 EAL: Detected lcore 10 as core 10 on socket 0
00:06:13.537 EAL: Detected lcore 11 as core 11 on socket 0
00:06:13.537 EAL: Detected lcore 12 as core 12 on socket 0
00:06:13.537 EAL: Detected lcore 13 as core 13 on socket 0
00:06:13.537 EAL: Detected lcore 14 as core 14 on socket 0
00:06:13.537 EAL: Detected lcore 15 as core 15 on socket 0
00:06:13.537 EAL: Detected lcore 16 as core 16 on socket 0
00:06:13.537 EAL: Detected lcore 17 as core 17 on socket 0
00:06:13.537 EAL: Detected lcore 18 as core 18 on socket 0
00:06:13.537 EAL: Detected lcore 19 as core 19 on socket 0
00:06:13.537 EAL: Detected lcore 20 as core 20 on socket 0
00:06:13.537 EAL: Detected lcore 21 as core 21 on socket 0
00:06:13.537 EAL: Detected lcore 22 as core 22 on socket 0
00:06:13.537 EAL: Detected lcore 23 as core 23 on socket 0
00:06:13.537 EAL: Detected lcore 24 as core 24 on socket 0
00:06:13.537 EAL: Detected lcore 25 as core 25 on socket 0
00:06:13.537 EAL: Detected lcore 26 as core 26 on socket 0
00:06:13.537 EAL: Detected lcore 27 as core 27 on socket 0
00:06:13.537 EAL: Detected lcore 28 as core 28 on socket 0
00:06:13.537 EAL: Detected lcore 29 as core 29 on socket 0
00:06:13.537 EAL: Detected lcore 30 as core 30 on socket 0
00:06:13.537 EAL: Detected lcore 31 as core 31 on socket 0
00:06:13.537 EAL: Detected lcore 32 as core 32 on socket 0
00:06:13.537 EAL: Detected lcore 33 as core 33 on socket 0
00:06:13.537 EAL: Detected lcore 34 as core 34 on socket 0
00:06:13.537 EAL: Detected lcore 35 as core 35 on socket 0
00:06:13.537 EAL: Detected lcore 36 as core 0 on socket 1
00:06:13.537 EAL: Detected lcore 37 as core 1 on socket 1
00:06:13.537 EAL: Detected lcore 38 as core 2 on socket 1
00:06:13.537 EAL: Detected lcore 39 as core 3 on socket 1
00:06:13.537 EAL: Detected lcore 40 as core 4 on socket 1
00:06:13.537 EAL: Detected lcore 41 as core 5 on socket 1
00:06:13.537 EAL: Detected lcore 42 as core 6 on socket 1
00:06:13.537 EAL: Detected lcore 43 as core 7 on socket 1
00:06:13.537 EAL: Detected lcore 44 as core 8 on socket 1
00:06:13.537 EAL: Detected lcore 45 as core 9 on socket 1
00:06:13.537 EAL: Detected lcore 46 as core 10 on socket 1
00:06:13.537 EAL: Detected lcore 47 as core 11 on socket 1
00:06:13.538 EAL: Detected lcore 48 as core 12 on socket 1
00:06:13.538 EAL: Detected lcore 49 as core 13 on socket 1
00:06:13.538 EAL: Detected lcore 50 as core 14 on socket 1
00:06:13.538 EAL: Detected lcore 51 as core 15 on socket 1
00:06:13.538 EAL: Detected lcore 52 as core 16 on socket 1
00:06:13.538 EAL: Detected lcore 53 as core 17 on socket 1
00:06:13.538 EAL: Detected lcore 54 as core 18 on socket 1
00:06:13.538 EAL: Detected lcore 55 as core 19 on socket 1
00:06:13.538 EAL: Detected lcore 56 as core 20 on socket 1
00:06:13.538 EAL: Detected lcore 57 as core 21 on socket 1
00:06:13.538 EAL: Detected lcore 58 as core 22 on socket 1
00:06:13.538 EAL: Detected lcore 59 as core 23 on socket 1
00:06:13.538 EAL: Detected lcore 60 as core 24 on socket 1
00:06:13.538 EAL: Detected lcore 61 as core 25 on socket 1
00:06:13.538 EAL: Detected lcore 62 as core 26 on socket 1
00:06:13.538 EAL: Detected lcore 63 as core 27 on socket 1
00:06:13.538 EAL: Detected lcore 64 as core 28 on socket 1
00:06:13.538 EAL: Detected lcore 65 as core 29 on socket 1
00:06:13.538 EAL: Detected lcore 66 as core 30 on socket 1
00:06:13.538 EAL: Detected lcore 67 as core 31 on socket 1
00:06:13.538 EAL: Detected lcore 68 as core 32 on socket 1
00:06:13.538 EAL: Detected lcore 69 as core 33 on socket 1
00:06:13.538 EAL: Detected lcore 70 as core 34 on socket 1
00:06:13.538 EAL: Detected lcore 71 as core 35 on socket 1
00:06:13.538 EAL: Detected lcore 72 as core 0 on socket 0
00:06:13.538 EAL: Detected lcore 73 as core 1 on socket 0
00:06:13.538 EAL: Detected lcore 74 as core 2 on socket 0
00:06:13.538 EAL: Detected lcore 75 as core 3 on socket 0
00:06:13.538 EAL: Detected lcore 76 as core 4 on socket 0
00:06:13.538 EAL: Detected lcore 77 as core 5 on socket 0
00:06:13.538 EAL: Detected lcore 78 as core 6 on socket 0
00:06:13.538 EAL: Detected lcore 79 as core 7 on socket 0
00:06:13.538 EAL: Detected lcore 80 as core 8 on socket 0
00:06:13.538 EAL: Detected lcore 81 as core 9 on socket 0
00:06:13.538 EAL: Detected lcore 82 as core 10 on socket 0
00:06:13.538 EAL: Detected lcore 83 as core 11 on socket 0
00:06:13.538 EAL: Detected lcore 84 as core 12 on socket 0
00:06:13.538 EAL: Detected lcore 85 as core 13 on socket 0
00:06:13.538 EAL: Detected lcore 86 as core 14 on socket 0
00:06:13.538 EAL: Detected lcore 87 as core 15 on socket 0
00:06:13.538 EAL: Detected lcore 88 as core 16 on socket 0
00:06:13.538 EAL: Detected lcore 89 as core 17 on socket 0
00:06:13.538 EAL: Detected lcore 90 as core 18 on socket 0
00:06:13.538 EAL: Detected lcore 91 as core 19 on socket 0
00:06:13.538 EAL: Detected lcore 92 as core 20 on socket 0
00:06:13.538 EAL: Detected lcore 93 as core 21 on socket 0
00:06:13.538 EAL: Detected lcore 94 as core 22 on socket 0
00:06:13.538 EAL: Detected lcore 95 as core 23 on socket 0
00:06:13.538 EAL: Detected lcore 96 as core 24 on socket 0
00:06:13.538 EAL: Detected lcore 97 as core 25 on socket 0
00:06:13.538 EAL: Detected lcore 98 as core 26 on socket 0
00:06:13.538 EAL: Detected lcore 99 as core 27 on socket 0
00:06:13.538 EAL: Detected lcore 100 as core 28 on socket 0
00:06:13.538 EAL: Detected lcore 101 as core 29 on socket 0
00:06:13.538 EAL: Detected lcore 102 as core 30 on socket 0
00:06:13.538 EAL: Detected lcore 103 as core 31 on socket 0
00:06:13.538 EAL: Detected lcore 104 as core 32 on socket 0
00:06:13.538 EAL: Detected lcore 105 as core 33 on socket 0
00:06:13.538 EAL: Detected lcore 106 as core 34 on socket 0
00:06:13.538 EAL: Detected lcore 107 as core 35 on socket 0
00:06:13.538 EAL: Detected lcore 108 as core 0 on socket 1
00:06:13.538 EAL: Detected lcore 109 as core 1 on socket 1
00:06:13.538 EAL: Detected lcore 110 as core 2 on socket 1
00:06:13.538 EAL: Detected lcore 111 as core 3 on socket 1
00:06:13.538 EAL: Detected lcore 112 as core 4 on socket 1
00:06:13.538 EAL: Detected lcore 113 as core 5 on socket 1
00:06:13.538 EAL: Detected lcore 114 as core 6 on socket 1
00:06:13.538 EAL: Detected lcore 115 as core 7 on socket 1
00:06:13.538 EAL: Detected lcore 116 as core 8 on socket 1
00:06:13.538 EAL: Detected lcore 117 as core 9 on socket 1
00:06:13.538 EAL: Detected lcore 118 as core 10 on socket 1
00:06:13.538 EAL: Detected lcore 119 as core 11 on socket 1
00:06:13.538 EAL: Detected lcore 120 as core 12 on socket 1
00:06:13.538 EAL: Detected lcore 121 as core 13 on socket 1
00:06:13.538 EAL: Detected lcore 122 as core 14 on socket 1
00:06:13.538 EAL: Detected lcore 123 as core 15 on socket 1
00:06:13.538 EAL: Detected lcore 124 as core 16 on socket 1
00:06:13.538 EAL: Detected lcore 125 as core 17 on socket 1
00:06:13.538 EAL: Detected lcore 126 as core 18 on socket 1
00:06:13.538 EAL: Detected lcore 127 as core 19 on socket 1
00:06:13.538 EAL: Skipped lcore 128 as core 20 on socket 1
00:06:13.538 EAL: Skipped lcore 129 as core 21 on socket 1
00:06:13.538 EAL: Skipped lcore 130 as core 22 on socket 1
00:06:13.538 EAL: Skipped lcore 131 as core 23 on socket 1
00:06:13.538 EAL: Skipped lcore 132 as core 24 on socket 1
00:06:13.538 EAL: Skipped lcore 133 as core 25 on socket 1
00:06:13.538 EAL: Skipped lcore 134 as core 26 on socket 1
00:06:13.538 EAL: Skipped lcore 135 as core 27 on socket 1
00:06:13.538 EAL: Skipped lcore 136 as core 28 on socket 1
00:06:13.538 EAL: Skipped lcore 137 as core 29 on socket 1
00:06:13.538 EAL: Skipped lcore 138 as core 30 on socket 1
00:06:13.538 EAL: Skipped lcore 139 as core 31 on socket 1
00:06:13.538 EAL: Skipped lcore 140 as core 32 on socket 1
00:06:13.538 EAL: Skipped lcore 141 as core 33 on socket 1
00:06:13.538 EAL: Skipped lcore 142 as core 34 on socket 1
00:06:13.538 EAL: Skipped lcore 143 as core 35 on socket 1
00:06:13.538 EAL: Maximum logical cores by configuration: 128
00:06:13.538 EAL: Detected CPU lcores: 128
00:06:13.538 EAL: Detected NUMA nodes: 2
00:06:13.538 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:13.538 EAL: Detected shared linkage of DPDK
00:06:13.538 EAL: No shared files mode enabled, IPC will be disabled
00:06:13.538 EAL: Bus pci wants IOVA as 'DC'
00:06:13.538 EAL: Buses did not request a specific IOVA mode.
00:06:13.538 EAL: IOMMU is available, selecting IOVA as VA mode.
00:06:13.538 EAL: Selected IOVA mode 'VA'
00:06:13.538 EAL: Probing VFIO support...
00:06:13.538 EAL: IOMMU type 1 (Type 1) is supported
00:06:13.538 EAL: IOMMU type 7 (sPAPR) is not supported
00:06:13.538 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:06:13.538 EAL: VFIO support initialized
00:06:13.538 EAL: Ask a virtual area of 0x2e000 bytes
00:06:13.538 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:13.538 EAL: Setting up physically contiguous memory...
00:06:13.538 EAL: Setting maximum number of open files to 524288
00:06:13.538 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:13.538 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:06:13.538 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:13.538 EAL: Ask a virtual area of 0x61000 bytes
00:06:13.538 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:13.538 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:13.538 EAL: Ask a virtual area of 0x400000000 bytes
00:06:13.538 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:13.538 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:13.538 EAL: Ask a virtual area of 0x61000 bytes
00:06:13.538 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:13.538 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:13.538 EAL: Ask a virtual area of 0x400000000 bytes
00:06:13.538 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:13.538 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:13.538 EAL: Ask a virtual area of 0x61000 bytes
00:06:13.538 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:13.538 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:13.538 EAL: Ask a virtual area of 0x400000000 bytes
00:06:13.538 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:13.538 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:13.538 EAL: Ask a virtual area of 0x61000 bytes
00:06:13.538 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:13.538 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:13.538 EAL: Ask a virtual area of 0x400000000 bytes
00:06:13.538 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:13.538 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:13.538 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:06:13.538 EAL: Ask a virtual area of 0x61000 bytes
00:06:13.538 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:06:13.538 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:13.538 EAL: Ask a virtual area of 0x400000000 bytes
00:06:13.538 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:06:13.538 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:06:13.538 EAL: Ask a virtual area of 0x61000 bytes
00:06:13.538 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:06:13.538 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:13.538 EAL: Ask a virtual area of 0x400000000 bytes
00:06:13.538 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:06:13.538 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:06:13.538 EAL: Ask a virtual area of 0x61000 bytes
00:06:13.538 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:06:13.538 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:13.538 EAL: Ask a virtual area of 0x400000000 bytes
00:06:13.538 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:06:13.538 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:06:13.538 EAL: Ask a virtual area of 0x61000 bytes
00:06:13.538 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:06:13.538 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:13.538 EAL: Ask a virtual area of 0x400000000 bytes
00:06:13.538 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:06:13.538 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:06:13.538 EAL: Hugepages will be freed exactly as allocated.
00:06:13.538 EAL: No shared files mode enabled, IPC is disabled
00:06:13.538 EAL: No shared files mode enabled, IPC is disabled
00:06:13.538 EAL: TSC frequency is ~2400000 KHz
00:06:13.538 EAL: Main lcore 0 is ready (tid=7f83130d8a00;cpuset=[0])
00:06:13.538 EAL: Trying to obtain current memory policy.
00:06:13.538 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.538 EAL: Restoring previous memory policy: 0
00:06:13.538 EAL: request: mp_malloc_sync
00:06:13.538 EAL: No shared files mode enabled, IPC is disabled
00:06:13.538 EAL: Heap on socket 0 was expanded by 2MB
00:06:13.538 EAL: No shared files mode enabled, IPC is disabled
00:06:13.538 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:13.538 EAL: Mem event callback 'spdk:(nil)' registered
00:06:13.538
00:06:13.538
00:06:13.538 CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.538 http://cunit.sourceforge.net/
00:06:13.538
00:06:13.538
00:06:13.538 Suite: components_suite
00:06:13.538 Test: vtophys_malloc_test ...passed
00:06:13.539 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:13.539 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.539 EAL: Restoring previous memory policy: 4
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was expanded by 4MB
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was shrunk by 4MB
00:06:13.539 EAL: Trying to obtain current memory policy.
00:06:13.539 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.539 EAL: Restoring previous memory policy: 4
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was expanded by 6MB
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was shrunk by 6MB
00:06:13.539 EAL: Trying to obtain current memory policy.
00:06:13.539 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.539 EAL: Restoring previous memory policy: 4
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was expanded by 10MB
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was shrunk by 10MB
00:06:13.539 EAL: Trying to obtain current memory policy.
00:06:13.539 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.539 EAL: Restoring previous memory policy: 4
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was expanded by 18MB
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was shrunk by 18MB
00:06:13.539 EAL: Trying to obtain current memory policy.
00:06:13.539 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.539 EAL: Restoring previous memory policy: 4
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.539 EAL: request: mp_malloc_sync
00:06:13.539 EAL: No shared files mode enabled, IPC is disabled
00:06:13.539 EAL: Heap on socket 0 was expanded by 34MB
00:06:13.539 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.799 EAL: request: mp_malloc_sync
00:06:13.799 EAL: No shared files mode enabled, IPC is disabled
00:06:13.799 EAL: Heap on socket 0 was shrunk by 34MB
00:06:13.799 EAL: Trying to obtain current memory policy.
00:06:13.799 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.799 EAL: Restoring previous memory policy: 4
00:06:13.799 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.799 EAL: request: mp_malloc_sync
00:06:13.799 EAL: No shared files mode enabled, IPC is disabled
00:06:13.799 EAL: Heap on socket 0 was expanded by 66MB
00:06:13.799 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.799 EAL: request: mp_malloc_sync
00:06:13.799 EAL: No shared files mode enabled, IPC is disabled
00:06:13.799 EAL: Heap on socket 0 was shrunk by 66MB
00:06:13.799 EAL: Trying to obtain current memory policy.
00:06:13.799 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.799 EAL: Restoring previous memory policy: 4
00:06:13.799 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.799 EAL: request: mp_malloc_sync
00:06:13.799 EAL: No shared files mode enabled, IPC is disabled
00:06:13.799 EAL: Heap on socket 0 was expanded by 130MB
00:06:13.799 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.799 EAL: request: mp_malloc_sync
00:06:13.799 EAL: No shared files mode enabled, IPC is disabled
00:06:13.799 EAL: Heap on socket 0 was shrunk by 130MB
00:06:13.799 EAL: Trying to obtain current memory policy.
00:06:13.799 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.799 EAL: Restoring previous memory policy: 4
00:06:13.799 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.799 EAL: request: mp_malloc_sync
00:06:13.799 EAL: No shared files mode enabled, IPC is disabled
00:06:13.799 EAL: Heap on socket 0 was expanded by 258MB
00:06:13.799 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.799 EAL: request: mp_malloc_sync
00:06:13.799 EAL: No shared files mode enabled, IPC is disabled
00:06:13.799 EAL: Heap on socket 0 was shrunk by 258MB
00:06:13.799 EAL: Trying to obtain current memory policy.
00:06:13.799 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:13.799 EAL: Restoring previous memory policy: 4
00:06:13.799 EAL: Calling mem event callback 'spdk:(nil)'
00:06:13.799 EAL: request: mp_malloc_sync
00:06:13.799 EAL: No shared files mode enabled, IPC is disabled
00:06:13.799 EAL: Heap on socket 0 was expanded by 514MB
00:06:14.061 EAL: Calling mem event callback 'spdk:(nil)'
00:06:14.061 EAL: request: mp_malloc_sync
00:06:14.061 EAL: No shared files mode enabled, IPC is disabled
00:06:14.061 EAL: Heap on socket 0 was shrunk by 514MB
00:06:14.061 EAL: Trying to obtain current memory policy.
00:06:14.061 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:14.061 EAL: Restoring previous memory policy: 4
00:06:14.061 EAL: Calling mem event callback 'spdk:(nil)'
00:06:14.061 EAL: request: mp_malloc_sync
00:06:14.061 EAL: No shared files mode enabled, IPC is disabled
00:06:14.061 EAL: Heap on socket 0 was expanded by 1026MB
00:06:14.322 EAL: Calling mem event callback 'spdk:(nil)'
00:06:14.322 EAL: request: mp_malloc_sync
00:06:14.322 EAL: No shared files mode enabled, IPC is disabled
00:06:14.322 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:14.322 passed
00:06:14.322
00:06:14.322 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.322 suites 1 1 n/a 0 0
00:06:14.322 tests 2 2 2 0 0
00:06:14.322 asserts 497 497 497 0 n/a
00:06:14.322
00:06:14.322 Elapsed time = 0.642 seconds
00:06:14.322 EAL: Calling mem event callback 'spdk:(nil)'
00:06:14.322 EAL: request: mp_malloc_sync
00:06:14.322 EAL: No shared files mode enabled, IPC is disabled
00:06:14.322 EAL: Heap on socket 0 was shrunk by 2MB
00:06:14.322 EAL: No shared files mode enabled, IPC is disabled
00:06:14.322 EAL: No shared files mode enabled, IPC is disabled
00:06:14.322 EAL: No shared files mode enabled, IPC is disabled
00:06:14.322
00:06:14.322 real 0m0.764s
00:06:14.322 user 0m0.403s
00:06:14.322 sys 0m0.329s
00:06:14.322 08:21:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:14.322 08:21:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:14.322 ************************************
00:06:14.322 END TEST env_vtophys
00:06:14.322 ************************************
00:06:14.322 08:21:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:14.322 08:21:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:14.322 08:21:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:14.322 08:21:06 env -- common/autotest_common.sh@10 -- # set +x
00:06:14.322 ************************************
00:06:14.322 START TEST env_pci
00:06:14.322 ************************************
00:06:14.322 08:21:06 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:14.322
00:06:14.322
00:06:14.322 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.322 http://cunit.sourceforge.net/
00:06:14.322
00:06:14.322
00:06:14.322 Suite: pci
00:06:14.322 Test: pci_hook ...[2024-10-01 08:21:06.121925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3515292 has claimed it
00:06:14.582 EAL: Cannot find device (10000:00:01.0)
00:06:14.582 EAL: Failed to attach device on primary process
00:06:14.582 passed
00:06:14.582
00:06:14.582 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.582 suites 1 1 n/a 0 0 00:06:14.582 tests 1 1 1 0 0 00:06:14.582 asserts 25 25 25 0 n/a 00:06:14.582 00:06:14.582 Elapsed time = 0.030 seconds 00:06:14.582 00:06:14.582 real 0m0.050s 00:06:14.582 user 0m0.016s 00:06:14.582 sys 0m0.034s 00:06:14.582 08:21:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.582 08:21:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:14.582 ************************************ 00:06:14.582 END TEST env_pci 00:06:14.582 ************************************ 00:06:14.582 08:21:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:14.582 08:21:06 env -- env/env.sh@15 -- # uname 00:06:14.582 08:21:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:14.582 08:21:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:14.582 08:21:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.582 08:21:06 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:14.582 08:21:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.582 08:21:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:14.582 ************************************ 00:06:14.582 START TEST env_dpdk_post_init 00:06:14.582 ************************************ 00:06:14.582 08:21:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.582 EAL: Detected CPU lcores: 128 00:06:14.582 EAL: Detected NUMA nodes: 2 00:06:14.582 EAL: Detected shared linkage of DPDK 00:06:14.582 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:14.582 EAL: Selected IOVA mode 'VA' 00:06:14.582 EAL: VFIO support initialized 00:06:14.582 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:14.582 EAL: Using IOMMU type 1 (Type 1) 00:06:14.844 EAL: Ignore mapping IO port bar(1) 00:06:14.844 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:15.104 EAL: Ignore mapping IO port bar(1) 00:06:15.104 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:15.104 EAL: Ignore mapping IO port bar(1) 00:06:15.364 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:15.364 EAL: Ignore mapping IO port bar(1) 00:06:15.624 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:15.624 EAL: Ignore mapping IO port bar(1) 00:06:15.884 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:15.884 EAL: Ignore mapping IO port bar(1) 00:06:15.884 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:16.145 EAL: Ignore mapping IO port bar(1) 00:06:16.145 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:16.406 EAL: Ignore mapping IO port bar(1) 00:06:16.406 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:16.667 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:16.667 EAL: Ignore mapping IO port bar(1) 00:06:16.928 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:16.928 EAL: Ignore mapping IO port bar(1) 00:06:17.188 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:17.188 EAL: Ignore mapping IO port bar(1) 00:06:17.448 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:17.448 EAL: Ignore mapping IO port bar(1) 00:06:17.448 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:17.709 EAL: Ignore mapping IO port bar(1) 00:06:17.709 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:17.969 EAL: Ignore mapping IO port bar(1) 00:06:17.969 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:18.229 EAL: Ignore mapping IO port bar(1) 00:06:18.229 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:18.491 EAL: Ignore mapping IO port bar(1) 00:06:18.491 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:18.491 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:18.491 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:18.491 Starting DPDK initialization... 00:06:18.491 Starting SPDK post initialization... 00:06:18.491 SPDK NVMe probe 00:06:18.491 Attaching to 0000:65:00.0 00:06:18.491 Attached to 0000:65:00.0 00:06:18.491 Cleaning up... 00:06:20.404 00:06:20.404 real 0m5.711s 00:06:20.404 user 0m0.095s 00:06:20.404 sys 0m0.161s 00:06:20.404 08:21:11 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.404 08:21:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.404 ************************************ 00:06:20.404 END TEST env_dpdk_post_init 00:06:20.404 ************************************ 00:06:20.404 08:21:11 env -- env/env.sh@26 -- # uname 00:06:20.404 08:21:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:20.404 08:21:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:20.404 08:21:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.404 08:21:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.404 08:21:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.404 ************************************ 00:06:20.404 START TEST env_mem_callbacks 00:06:20.404 ************************************ 00:06:20.404 08:21:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:20.404 EAL: Detected CPU lcores: 128 00:06:20.404 EAL: Detected NUMA nodes: 2 00:06:20.404 EAL: Detected shared linkage of DPDK 00:06:20.404 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:20.404 EAL: Selected IOVA mode 'VA' 00:06:20.404 EAL: VFIO support initialized 00:06:20.404 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:20.404 00:06:20.405 00:06:20.405 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.405 http://cunit.sourceforge.net/ 00:06:20.405 00:06:20.405 00:06:20.405 Suite: memory 00:06:20.405 Test: test ... 
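The register/unregister lines that follow exercise the SPDK mem-callback path: each 'register' notifies the env layer of a new virtual region and each 'unregister' removes it, mirroring the test's malloc/free pairs. The binary can be re-run standalone; a sketch assuming hugepages are already configured (the harness sets this up earlier in the job) and using the path echoed by run_test above:

  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
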
00:06:20.405 register 0x200000200000 2097152 00:06:20.405 malloc 3145728 00:06:20.405 register 0x200000400000 4194304 00:06:20.405 buf 0x200000500000 len 3145728 PASSED 00:06:20.405 malloc 64 00:06:20.405 buf 0x2000004fff40 len 64 PASSED 00:06:20.405 malloc 4194304 00:06:20.405 register 0x200000800000 6291456 00:06:20.405 buf 0x200000a00000 len 4194304 PASSED 00:06:20.405 free 0x200000500000 3145728 00:06:20.405 free 0x2000004fff40 64 00:06:20.405 unregister 0x200000400000 4194304 PASSED 00:06:20.405 free 0x200000a00000 4194304 00:06:20.405 unregister 0x200000800000 6291456 PASSED 00:06:20.405 malloc 8388608 00:06:20.405 register 0x200000400000 10485760 00:06:20.405 buf 0x200000600000 len 8388608 PASSED 00:06:20.405 free 0x200000600000 8388608 00:06:20.405 unregister 0x200000400000 10485760 PASSED 00:06:20.405 passed 00:06:20.405 00:06:20.405 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.405 suites 1 1 n/a 0 0 00:06:20.405 tests 1 1 1 0 0 00:06:20.405 asserts 15 15 15 0 n/a 00:06:20.405 00:06:20.405 Elapsed time = 0.004 seconds 00:06:20.405 00:06:20.405 real 0m0.054s 00:06:20.405 user 0m0.018s 00:06:20.405 sys 0m0.036s 00:06:20.405 08:21:12 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.405 08:21:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:20.405 ************************************ 00:06:20.405 END TEST env_mem_callbacks 00:06:20.405 ************************************ 00:06:20.405 00:06:20.405 real 0m7.391s 00:06:20.405 user 0m0.987s 00:06:20.405 sys 0m0.948s 00:06:20.405 08:21:12 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.405 08:21:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.405 ************************************ 00:06:20.405 END TEST env 00:06:20.405 ************************************ 00:06:20.405 08:21:12 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:20.405 08:21:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.405 08:21:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.405 08:21:12 -- common/autotest_common.sh@10 -- # set +x 00:06:20.405 ************************************ 00:06:20.405 START TEST rpc 00:06:20.405 ************************************ 00:06:20.405 08:21:12 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:20.665 * Looking for test storage... 
00:06:20.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:20.665 08:21:12 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.665 08:21:12 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.665 08:21:12 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.665 08:21:12 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.665 08:21:12 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.665 08:21:12 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.665 08:21:12 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.665 08:21:12 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.665 08:21:12 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.666 08:21:12 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.666 08:21:12 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.666 08:21:12 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.666 08:21:12 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.666 08:21:12 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.666 08:21:12 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.666 08:21:12 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:20.666 08:21:12 rpc -- scripts/common.sh@345 -- # : 1 00:06:20.666 08:21:12 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.666 08:21:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.666 08:21:12 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:20.666 08:21:12 rpc -- scripts/common.sh@353 -- # local d=1 00:06:20.666 08:21:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.666 08:21:12 rpc -- scripts/common.sh@355 -- # echo 1 00:06:20.666 08:21:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.666 08:21:12 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:20.666 08:21:12 rpc -- scripts/common.sh@353 -- # local d=2 00:06:20.666 08:21:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.666 08:21:12 rpc -- scripts/common.sh@355 -- # echo 2 00:06:20.666 08:21:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.666 08:21:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.666 08:21:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.666 08:21:12 rpc -- scripts/common.sh@368 -- # return 0 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.666 --rc genhtml_branch_coverage=1 00:06:20.666 --rc genhtml_function_coverage=1 00:06:20.666 --rc genhtml_legend=1 00:06:20.666 --rc geninfo_all_blocks=1 00:06:20.666 --rc geninfo_unexecuted_blocks=1 00:06:20.666 00:06:20.666 ' 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.666 --rc genhtml_branch_coverage=1 00:06:20.666 --rc genhtml_function_coverage=1 00:06:20.666 --rc genhtml_legend=1 00:06:20.666 --rc geninfo_all_blocks=1 00:06:20.666 --rc geninfo_unexecuted_blocks=1 00:06:20.666 00:06:20.666 ' 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.666 --rc genhtml_branch_coverage=1 00:06:20.666 --rc genhtml_function_coverage=1 
00:06:20.666 --rc genhtml_legend=1 00:06:20.666 --rc geninfo_all_blocks=1 00:06:20.666 --rc geninfo_unexecuted_blocks=1 00:06:20.666 00:06:20.666 ' 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.666 --rc genhtml_branch_coverage=1 00:06:20.666 --rc genhtml_function_coverage=1 00:06:20.666 --rc genhtml_legend=1 00:06:20.666 --rc geninfo_all_blocks=1 00:06:20.666 --rc geninfo_unexecuted_blocks=1 00:06:20.666 00:06:20.666 ' 00:06:20.666 08:21:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3517208 00:06:20.666 08:21:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.666 08:21:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:20.666 08:21:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3517208 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 3517208 ']' 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.666 08:21:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.666 [2024-10-01 08:21:12.450721] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:06:20.666 [2024-10-01 08:21:12.450773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517208 ] 00:06:20.926 [2024-10-01 08:21:12.511702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.926 [2024-10-01 08:21:12.574899] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:20.926 [2024-10-01 08:21:12.574938] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3517208' to capture a snapshot of events at runtime. 00:06:20.926 [2024-10-01 08:21:12.574945] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.926 [2024-10-01 08:21:12.574952] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.926 [2024-10-01 08:21:12.574958] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3517208 for offline analysis/debug. 
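The NOTICE lines above show spdk_tgt coming up for the rpc suite with the bdev tracepoint group enabled (-e bdev) and advertising /dev/shm/spdk_tgt_trace.pid3517208 for snapshots. The rpc_cmd calls traced below can be replayed by hand against the same /var/tmp/spdk.sock socket; a minimal sketch, assuming the spdk repo root as working directory and that spdk_trace sits next to spdk_tgt under build/bin:

  ./scripts/rpc.py bdev_malloc_create 8 512              # returns "Malloc0"
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length            # the test expects 2
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./build/bin/spdk_trace -s spdk_tgt -p 3517208          # snapshot command from the notice above
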
00:06:20.926 [2024-10-01 08:21:12.575520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.516 08:21:13 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.516 08:21:13 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.516 08:21:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:21.516 08:21:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:21.516 08:21:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:21.516 08:21:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:21.516 08:21:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.516 08:21:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.516 08:21:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.516 ************************************ 00:06:21.516 START TEST rpc_integrity 00:06:21.516 ************************************ 00:06:21.516 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:21.516 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:21.516 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.516 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.516 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.516 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:21.516 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:21.516 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:21.516 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:21.516 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.516 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.777 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:21.777 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.777 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:21.777 { 00:06:21.777 "name": "Malloc0", 00:06:21.777 "aliases": [ 00:06:21.777 "68c3b639-f2ce-41a4-9f80-c150326a1ef2" 00:06:21.777 ], 00:06:21.777 "product_name": "Malloc disk", 00:06:21.777 "block_size": 512, 00:06:21.777 "num_blocks": 16384, 00:06:21.777 "uuid": "68c3b639-f2ce-41a4-9f80-c150326a1ef2", 00:06:21.777 "assigned_rate_limits": { 00:06:21.777 "rw_ios_per_sec": 0, 00:06:21.777 "rw_mbytes_per_sec": 0, 00:06:21.777 "r_mbytes_per_sec": 0, 00:06:21.777 "w_mbytes_per_sec": 0 00:06:21.777 }, 
00:06:21.777 "claimed": false, 00:06:21.777 "zoned": false, 00:06:21.777 "supported_io_types": { 00:06:21.777 "read": true, 00:06:21.777 "write": true, 00:06:21.777 "unmap": true, 00:06:21.777 "flush": true, 00:06:21.777 "reset": true, 00:06:21.777 "nvme_admin": false, 00:06:21.777 "nvme_io": false, 00:06:21.777 "nvme_io_md": false, 00:06:21.777 "write_zeroes": true, 00:06:21.777 "zcopy": true, 00:06:21.777 "get_zone_info": false, 00:06:21.777 "zone_management": false, 00:06:21.777 "zone_append": false, 00:06:21.777 "compare": false, 00:06:21.777 "compare_and_write": false, 00:06:21.777 "abort": true, 00:06:21.777 "seek_hole": false, 00:06:21.777 "seek_data": false, 00:06:21.777 "copy": true, 00:06:21.777 "nvme_iov_md": false 00:06:21.777 }, 00:06:21.777 "memory_domains": [ 00:06:21.777 { 00:06:21.777 "dma_device_id": "system", 00:06:21.777 "dma_device_type": 1 00:06:21.777 }, 00:06:21.777 { 00:06:21.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.777 "dma_device_type": 2 00:06:21.777 } 00:06:21.777 ], 00:06:21.777 "driver_specific": {} 00:06:21.777 } 00:06:21.777 ]' 00:06:21.777 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:21.777 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:21.777 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.777 [2024-10-01 08:21:13.412600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:21.777 [2024-10-01 08:21:13.412635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.777 [2024-10-01 08:21:13.412648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12c0370 00:06:21.777 [2024-10-01 08:21:13.412656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.777 [2024-10-01 08:21:13.414019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.777 [2024-10-01 08:21:13.414040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:21.777 Passthru0 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.777 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.777 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.777 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:21.777 { 00:06:21.777 "name": "Malloc0", 00:06:21.777 "aliases": [ 00:06:21.777 "68c3b639-f2ce-41a4-9f80-c150326a1ef2" 00:06:21.778 ], 00:06:21.778 "product_name": "Malloc disk", 00:06:21.778 "block_size": 512, 00:06:21.778 "num_blocks": 16384, 00:06:21.778 "uuid": "68c3b639-f2ce-41a4-9f80-c150326a1ef2", 00:06:21.778 "assigned_rate_limits": { 00:06:21.778 "rw_ios_per_sec": 0, 00:06:21.778 "rw_mbytes_per_sec": 0, 00:06:21.778 "r_mbytes_per_sec": 0, 00:06:21.778 "w_mbytes_per_sec": 0 00:06:21.778 }, 00:06:21.778 "claimed": true, 00:06:21.778 "claim_type": "exclusive_write", 00:06:21.778 "zoned": false, 00:06:21.778 "supported_io_types": { 00:06:21.778 "read": true, 00:06:21.778 "write": true, 00:06:21.778 "unmap": true, 00:06:21.778 "flush": 
true, 00:06:21.778 "reset": true, 00:06:21.778 "nvme_admin": false, 00:06:21.778 "nvme_io": false, 00:06:21.778 "nvme_io_md": false, 00:06:21.778 "write_zeroes": true, 00:06:21.778 "zcopy": true, 00:06:21.778 "get_zone_info": false, 00:06:21.778 "zone_management": false, 00:06:21.778 "zone_append": false, 00:06:21.778 "compare": false, 00:06:21.778 "compare_and_write": false, 00:06:21.778 "abort": true, 00:06:21.778 "seek_hole": false, 00:06:21.778 "seek_data": false, 00:06:21.778 "copy": true, 00:06:21.778 "nvme_iov_md": false 00:06:21.778 }, 00:06:21.778 "memory_domains": [ 00:06:21.778 { 00:06:21.778 "dma_device_id": "system", 00:06:21.778 "dma_device_type": 1 00:06:21.778 }, 00:06:21.778 { 00:06:21.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.778 "dma_device_type": 2 00:06:21.778 } 00:06:21.778 ], 00:06:21.778 "driver_specific": {} 00:06:21.778 }, 00:06:21.778 { 00:06:21.778 "name": "Passthru0", 00:06:21.778 "aliases": [ 00:06:21.778 "b699fc8f-b66b-58e9-8a4a-c93644a7a7e8" 00:06:21.778 ], 00:06:21.778 "product_name": "passthru", 00:06:21.778 "block_size": 512, 00:06:21.778 "num_blocks": 16384, 00:06:21.778 "uuid": "b699fc8f-b66b-58e9-8a4a-c93644a7a7e8", 00:06:21.778 "assigned_rate_limits": { 00:06:21.778 "rw_ios_per_sec": 0, 00:06:21.778 "rw_mbytes_per_sec": 0, 00:06:21.778 "r_mbytes_per_sec": 0, 00:06:21.778 "w_mbytes_per_sec": 0 00:06:21.778 }, 00:06:21.778 "claimed": false, 00:06:21.778 "zoned": false, 00:06:21.778 "supported_io_types": { 00:06:21.778 "read": true, 00:06:21.778 "write": true, 00:06:21.778 "unmap": true, 00:06:21.778 "flush": true, 00:06:21.778 "reset": true, 00:06:21.778 "nvme_admin": false, 00:06:21.778 "nvme_io": false, 00:06:21.778 "nvme_io_md": false, 00:06:21.778 "write_zeroes": true, 00:06:21.778 "zcopy": true, 00:06:21.778 "get_zone_info": false, 00:06:21.778 "zone_management": false, 00:06:21.778 "zone_append": false, 00:06:21.778 "compare": false, 00:06:21.778 "compare_and_write": false, 00:06:21.778 "abort": true, 00:06:21.778 "seek_hole": false, 00:06:21.778 "seek_data": false, 00:06:21.778 "copy": true, 00:06:21.778 "nvme_iov_md": false 00:06:21.778 }, 00:06:21.778 "memory_domains": [ 00:06:21.778 { 00:06:21.778 "dma_device_id": "system", 00:06:21.778 "dma_device_type": 1 00:06:21.778 }, 00:06:21.778 { 00:06:21.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.778 "dma_device_type": 2 00:06:21.778 } 00:06:21.778 ], 00:06:21.778 "driver_specific": { 00:06:21.778 "passthru": { 00:06:21.778 "name": "Passthru0", 00:06:21.778 "base_bdev_name": "Malloc0" 00:06:21.778 } 00:06:21.778 } 00:06:21.778 } 00:06:21.778 ]' 00:06:21.778 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:21.778 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:21.778 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.778 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.778 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.778 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:21.778 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:21.778 08:21:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:21.778 00:06:21.778 real 0m0.298s 00:06:21.778 user 0m0.181s 00:06:21.778 sys 0m0.045s 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.778 08:21:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 ************************************ 00:06:21.778 END TEST rpc_integrity 00:06:21.778 ************************************ 00:06:22.039 08:21:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:22.039 08:21:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.039 08:21:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.039 08:21:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.039 ************************************ 00:06:22.039 START TEST rpc_plugins 00:06:22.039 ************************************ 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:22.039 { 00:06:22.039 "name": "Malloc1", 00:06:22.039 "aliases": [ 00:06:22.039 "f7923784-759a-4c69-acae-3456ae46d8f1" 00:06:22.039 ], 00:06:22.039 "product_name": "Malloc disk", 00:06:22.039 "block_size": 4096, 00:06:22.039 "num_blocks": 256, 00:06:22.039 "uuid": "f7923784-759a-4c69-acae-3456ae46d8f1", 00:06:22.039 "assigned_rate_limits": { 00:06:22.039 "rw_ios_per_sec": 0, 00:06:22.039 "rw_mbytes_per_sec": 0, 00:06:22.039 "r_mbytes_per_sec": 0, 00:06:22.039 "w_mbytes_per_sec": 0 00:06:22.039 }, 00:06:22.039 "claimed": false, 00:06:22.039 "zoned": false, 00:06:22.039 "supported_io_types": { 00:06:22.039 "read": true, 00:06:22.039 "write": true, 00:06:22.039 "unmap": true, 00:06:22.039 "flush": true, 00:06:22.039 "reset": true, 00:06:22.039 "nvme_admin": false, 00:06:22.039 "nvme_io": false, 00:06:22.039 "nvme_io_md": false, 00:06:22.039 "write_zeroes": true, 00:06:22.039 "zcopy": true, 00:06:22.039 "get_zone_info": false, 00:06:22.039 "zone_management": false, 00:06:22.039 "zone_append": false, 00:06:22.039 "compare": false, 00:06:22.039 "compare_and_write": false, 00:06:22.039 "abort": true, 00:06:22.039 "seek_hole": false, 00:06:22.039 "seek_data": false, 00:06:22.039 "copy": true, 00:06:22.039 "nvme_iov_md": false 
00:06:22.039 }, 00:06:22.039 "memory_domains": [ 00:06:22.039 { 00:06:22.039 "dma_device_id": "system", 00:06:22.039 "dma_device_type": 1 00:06:22.039 }, 00:06:22.039 { 00:06:22.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.039 "dma_device_type": 2 00:06:22.039 } 00:06:22.039 ], 00:06:22.039 "driver_specific": {} 00:06:22.039 } 00:06:22.039 ]' 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:22.039 08:21:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:22.039 00:06:22.039 real 0m0.141s 00:06:22.039 user 0m0.085s 00:06:22.039 sys 0m0.020s 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.039 08:21:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.039 ************************************ 00:06:22.039 END TEST rpc_plugins 00:06:22.039 ************************************ 00:06:22.039 08:21:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:22.039 08:21:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.039 08:21:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.039 08:21:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.040 ************************************ 00:06:22.040 START TEST rpc_trace_cmd_test 00:06:22.040 ************************************ 00:06:22.040 08:21:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:22.040 08:21:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:22.040 08:21:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:22.040 08:21:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.040 08:21:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.299 08:21:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.299 08:21:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:22.299 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3517208", 00:06:22.299 "tpoint_group_mask": "0x8", 00:06:22.299 "iscsi_conn": { 00:06:22.299 "mask": "0x2", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "scsi": { 00:06:22.299 "mask": "0x4", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "bdev": { 00:06:22.299 "mask": "0x8", 00:06:22.299 "tpoint_mask": "0xffffffffffffffff" 00:06:22.299 }, 00:06:22.299 "nvmf_rdma": { 00:06:22.299 "mask": "0x10", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "nvmf_tcp": { 00:06:22.299 "mask": "0x20", 00:06:22.299 
"tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "ftl": { 00:06:22.299 "mask": "0x40", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "blobfs": { 00:06:22.299 "mask": "0x80", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "dsa": { 00:06:22.299 "mask": "0x200", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "thread": { 00:06:22.299 "mask": "0x400", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "nvme_pcie": { 00:06:22.299 "mask": "0x800", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "iaa": { 00:06:22.299 "mask": "0x1000", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "nvme_tcp": { 00:06:22.299 "mask": "0x2000", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "bdev_nvme": { 00:06:22.299 "mask": "0x4000", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "sock": { 00:06:22.299 "mask": "0x8000", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "blob": { 00:06:22.299 "mask": "0x10000", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 }, 00:06:22.299 "bdev_raid": { 00:06:22.299 "mask": "0x20000", 00:06:22.299 "tpoint_mask": "0x0" 00:06:22.299 } 00:06:22.299 }' 00:06:22.299 08:21:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:22.299 08:21:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:22.299 08:21:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:22.299 08:21:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:22.299 08:21:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:22.299 08:21:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:22.299 08:21:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:22.299 08:21:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:22.299 08:21:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:22.299 08:21:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:22.299 00:06:22.299 real 0m0.245s 00:06:22.299 user 0m0.204s 00:06:22.299 sys 0m0.029s 00:06:22.299 08:21:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.299 08:21:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.299 ************************************ 00:06:22.299 END TEST rpc_trace_cmd_test 00:06:22.299 ************************************ 00:06:22.559 08:21:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:22.559 08:21:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:22.559 08:21:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:22.559 08:21:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.559 08:21:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.559 08:21:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.559 ************************************ 00:06:22.559 START TEST rpc_daemon_integrity 00:06:22.559 ************************************ 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.559 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:22.559 { 00:06:22.559 "name": "Malloc2", 00:06:22.559 "aliases": [ 00:06:22.559 "da2f9221-44dd-4ad0-945a-cd332f75d93c" 00:06:22.559 ], 00:06:22.559 "product_name": "Malloc disk", 00:06:22.559 "block_size": 512, 00:06:22.559 "num_blocks": 16384, 00:06:22.559 "uuid": "da2f9221-44dd-4ad0-945a-cd332f75d93c", 00:06:22.559 "assigned_rate_limits": { 00:06:22.559 "rw_ios_per_sec": 0, 00:06:22.559 "rw_mbytes_per_sec": 0, 00:06:22.559 "r_mbytes_per_sec": 0, 00:06:22.559 "w_mbytes_per_sec": 0 00:06:22.559 }, 00:06:22.559 "claimed": false, 00:06:22.560 "zoned": false, 00:06:22.560 "supported_io_types": { 00:06:22.560 "read": true, 00:06:22.560 "write": true, 00:06:22.560 "unmap": true, 00:06:22.560 "flush": true, 00:06:22.560 "reset": true, 00:06:22.560 "nvme_admin": false, 00:06:22.560 "nvme_io": false, 00:06:22.560 "nvme_io_md": false, 00:06:22.560 "write_zeroes": true, 00:06:22.560 "zcopy": true, 00:06:22.560 "get_zone_info": false, 00:06:22.560 "zone_management": false, 00:06:22.560 "zone_append": false, 00:06:22.560 "compare": false, 00:06:22.560 "compare_and_write": false, 00:06:22.560 "abort": true, 00:06:22.560 "seek_hole": false, 00:06:22.560 "seek_data": false, 00:06:22.560 "copy": true, 00:06:22.560 "nvme_iov_md": false 00:06:22.560 }, 00:06:22.560 "memory_domains": [ 00:06:22.560 { 00:06:22.560 "dma_device_id": "system", 00:06:22.560 "dma_device_type": 1 00:06:22.560 }, 00:06:22.560 { 00:06:22.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.560 "dma_device_type": 2 00:06:22.560 } 00:06:22.560 ], 00:06:22.560 "driver_specific": {} 00:06:22.560 } 00:06:22.560 ]' 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.560 [2024-10-01 08:21:14.311039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:22.560 [2024-10-01 08:21:14.311069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:22.560 
[2024-10-01 08:21:14.311082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f1eb0 00:06:22.560 [2024-10-01 08:21:14.311089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:22.560 [2024-10-01 08:21:14.312401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:22.560 [2024-10-01 08:21:14.312422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:22.560 Passthru0 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:22.560 { 00:06:22.560 "name": "Malloc2", 00:06:22.560 "aliases": [ 00:06:22.560 "da2f9221-44dd-4ad0-945a-cd332f75d93c" 00:06:22.560 ], 00:06:22.560 "product_name": "Malloc disk", 00:06:22.560 "block_size": 512, 00:06:22.560 "num_blocks": 16384, 00:06:22.560 "uuid": "da2f9221-44dd-4ad0-945a-cd332f75d93c", 00:06:22.560 "assigned_rate_limits": { 00:06:22.560 "rw_ios_per_sec": 0, 00:06:22.560 "rw_mbytes_per_sec": 0, 00:06:22.560 "r_mbytes_per_sec": 0, 00:06:22.560 "w_mbytes_per_sec": 0 00:06:22.560 }, 00:06:22.560 "claimed": true, 00:06:22.560 "claim_type": "exclusive_write", 00:06:22.560 "zoned": false, 00:06:22.560 "supported_io_types": { 00:06:22.560 "read": true, 00:06:22.560 "write": true, 00:06:22.560 "unmap": true, 00:06:22.560 "flush": true, 00:06:22.560 "reset": true, 00:06:22.560 "nvme_admin": false, 00:06:22.560 "nvme_io": false, 00:06:22.560 "nvme_io_md": false, 00:06:22.560 "write_zeroes": true, 00:06:22.560 "zcopy": true, 00:06:22.560 "get_zone_info": false, 00:06:22.560 "zone_management": false, 00:06:22.560 "zone_append": false, 00:06:22.560 "compare": false, 00:06:22.560 "compare_and_write": false, 00:06:22.560 "abort": true, 00:06:22.560 "seek_hole": false, 00:06:22.560 "seek_data": false, 00:06:22.560 "copy": true, 00:06:22.560 "nvme_iov_md": false 00:06:22.560 }, 00:06:22.560 "memory_domains": [ 00:06:22.560 { 00:06:22.560 "dma_device_id": "system", 00:06:22.560 "dma_device_type": 1 00:06:22.560 }, 00:06:22.560 { 00:06:22.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.560 "dma_device_type": 2 00:06:22.560 } 00:06:22.560 ], 00:06:22.560 "driver_specific": {} 00:06:22.560 }, 00:06:22.560 { 00:06:22.560 "name": "Passthru0", 00:06:22.560 "aliases": [ 00:06:22.560 "da9d48d7-7b59-5087-9d04-4245e2f9b922" 00:06:22.560 ], 00:06:22.560 "product_name": "passthru", 00:06:22.560 "block_size": 512, 00:06:22.560 "num_blocks": 16384, 00:06:22.560 "uuid": "da9d48d7-7b59-5087-9d04-4245e2f9b922", 00:06:22.560 "assigned_rate_limits": { 00:06:22.560 "rw_ios_per_sec": 0, 00:06:22.560 "rw_mbytes_per_sec": 0, 00:06:22.560 "r_mbytes_per_sec": 0, 00:06:22.560 "w_mbytes_per_sec": 0 00:06:22.560 }, 00:06:22.560 "claimed": false, 00:06:22.560 "zoned": false, 00:06:22.560 "supported_io_types": { 00:06:22.560 "read": true, 00:06:22.560 "write": true, 00:06:22.560 "unmap": true, 00:06:22.560 "flush": true, 00:06:22.560 "reset": true, 00:06:22.560 "nvme_admin": false, 00:06:22.560 "nvme_io": false, 00:06:22.560 "nvme_io_md": false, 00:06:22.560 
"write_zeroes": true, 00:06:22.560 "zcopy": true, 00:06:22.560 "get_zone_info": false, 00:06:22.560 "zone_management": false, 00:06:22.560 "zone_append": false, 00:06:22.560 "compare": false, 00:06:22.560 "compare_and_write": false, 00:06:22.560 "abort": true, 00:06:22.560 "seek_hole": false, 00:06:22.560 "seek_data": false, 00:06:22.560 "copy": true, 00:06:22.560 "nvme_iov_md": false 00:06:22.560 }, 00:06:22.560 "memory_domains": [ 00:06:22.560 { 00:06:22.560 "dma_device_id": "system", 00:06:22.560 "dma_device_type": 1 00:06:22.560 }, 00:06:22.560 { 00:06:22.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.560 "dma_device_type": 2 00:06:22.560 } 00:06:22.560 ], 00:06:22.560 "driver_specific": { 00:06:22.560 "passthru": { 00:06:22.560 "name": "Passthru0", 00:06:22.560 "base_bdev_name": "Malloc2" 00:06:22.560 } 00:06:22.560 } 00:06:22.560 } 00:06:22.560 ]' 00:06:22.560 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:22.880 00:06:22.880 real 0m0.286s 00:06:22.880 user 0m0.185s 00:06:22.880 sys 0m0.035s 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.880 08:21:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.880 ************************************ 00:06:22.880 END TEST rpc_daemon_integrity 00:06:22.880 ************************************ 00:06:22.881 08:21:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:22.881 08:21:14 rpc -- rpc/rpc.sh@84 -- # killprocess 3517208 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@950 -- # '[' -z 3517208 ']' 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@954 -- # kill -0 3517208 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@955 -- # uname 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3517208 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.881 08:21:14 rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3517208' 00:06:22.881 killing process with pid 3517208 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@969 -- # kill 3517208 00:06:22.881 08:21:14 rpc -- common/autotest_common.sh@974 -- # wait 3517208 00:06:23.141 00:06:23.141 real 0m2.586s 00:06:23.141 user 0m3.378s 00:06:23.141 sys 0m0.703s 00:06:23.141 08:21:14 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.141 08:21:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.141 ************************************ 00:06:23.141 END TEST rpc 00:06:23.141 ************************************ 00:06:23.141 08:21:14 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:23.141 08:21:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.141 08:21:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.141 08:21:14 -- common/autotest_common.sh@10 -- # set +x 00:06:23.141 ************************************ 00:06:23.141 START TEST skip_rpc 00:06:23.141 ************************************ 00:06:23.141 08:21:14 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:23.141 * Looking for test storage... 00:06:23.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:23.402 08:21:14 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.402 08:21:14 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.402 08:21:14 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.402 08:21:15 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.402 08:21:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:23.402 08:21:15 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.402 08:21:15 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.402 --rc genhtml_branch_coverage=1 00:06:23.402 --rc genhtml_function_coverage=1 00:06:23.402 --rc genhtml_legend=1 00:06:23.402 --rc geninfo_all_blocks=1 00:06:23.402 --rc geninfo_unexecuted_blocks=1 00:06:23.402 00:06:23.402 ' 00:06:23.402 08:21:15 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.402 --rc genhtml_branch_coverage=1 00:06:23.402 --rc genhtml_function_coverage=1 00:06:23.402 --rc genhtml_legend=1 00:06:23.402 --rc geninfo_all_blocks=1 00:06:23.402 --rc geninfo_unexecuted_blocks=1 00:06:23.402 00:06:23.402 ' 00:06:23.402 08:21:15 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.402 --rc genhtml_branch_coverage=1 00:06:23.402 --rc genhtml_function_coverage=1 00:06:23.402 --rc genhtml_legend=1 00:06:23.402 --rc geninfo_all_blocks=1 00:06:23.402 --rc geninfo_unexecuted_blocks=1 00:06:23.402 00:06:23.402 ' 00:06:23.402 08:21:15 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.402 --rc genhtml_branch_coverage=1 00:06:23.402 --rc genhtml_function_coverage=1 00:06:23.402 --rc genhtml_legend=1 00:06:23.402 --rc geninfo_all_blocks=1 00:06:23.402 --rc geninfo_unexecuted_blocks=1 00:06:23.402 00:06:23.402 ' 00:06:23.402 08:21:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:23.402 08:21:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:23.402 08:21:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:23.402 08:21:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.402 08:21:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.403 08:21:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.403 ************************************ 00:06:23.403 START TEST skip_rpc 00:06:23.403 ************************************ 00:06:23.403 08:21:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:23.403 
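test_skip_rpc, traced next, starts the target with the RPC server disabled and treats a failing RPC as the passing outcome. Condensed into a hand-run sketch (the five-second pause mirrors the suite's own sleep at skip_rpc.sh@19; paths assume the spdk repo root):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                  # wait for startup, as the suite does
  if ./scripts/rpc.py spdk_get_version; then
      echo "FAIL: RPC server unexpectedly answered" >&2
  else
      echo "PASS: target is up with RPC disabled"
  fi
  kill "$tgt_pid"
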
08:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3517942 00:06:23.403 08:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.403 08:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:23.403 08:21:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:23.403 [2024-10-01 08:21:15.157292] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:06:23.403 [2024-10-01 08:21:15.157348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517942 ] 00:06:23.403 [2024-10-01 08:21:15.220304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.662 [2024-10-01 08:21:15.294096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3517942 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3517942 ']' 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3517942 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3517942 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3517942' 00:06:28.948 killing process with pid 3517942 00:06:28.948 08:21:20 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3517942 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3517942 00:06:28.948 00:06:28.948 real 0m5.309s 00:06:28.948 user 0m5.101s 00:06:28.948 sys 0m0.254s 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.948 08:21:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 ************************************ 00:06:28.948 END TEST skip_rpc 00:06:28.948 ************************************ 00:06:28.948 08:21:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:28.948 08:21:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.948 08:21:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.948 08:21:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 ************************************ 00:06:28.948 START TEST skip_rpc_with_json 00:06:28.948 ************************************ 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3519095 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3519095 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3519095 ']' 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.948 08:21:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 [2024-10-01 08:21:20.525114] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:06:28.948 [2024-10-01 08:21:20.525162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519095 ] 00:06:28.948 [2024-10-01 08:21:20.583358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.948 [2024-10-01 08:21:20.647547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.520 [2024-10-01 08:21:21.320467] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:29.520 request: 00:06:29.520 { 00:06:29.520 "trtype": "tcp", 00:06:29.520 "method": "nvmf_get_transports", 00:06:29.520 "req_id": 1 00:06:29.520 } 00:06:29.520 Got JSON-RPC error response 00:06:29.520 response: 00:06:29.520 { 00:06:29.520 "code": -19, 00:06:29.520 "message": "No such device" 00:06:29.520 } 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.520 [2024-10-01 08:21:21.332603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.520 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.782 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.782 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:29.782 { 00:06:29.782 "subsystems": [ 00:06:29.782 { 00:06:29.782 "subsystem": "fsdev", 00:06:29.782 "config": [ 00:06:29.782 { 00:06:29.782 "method": "fsdev_set_opts", 00:06:29.782 "params": { 00:06:29.782 "fsdev_io_pool_size": 65535, 00:06:29.782 "fsdev_io_cache_size": 256 00:06:29.782 } 00:06:29.782 } 00:06:29.782 ] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "vfio_user_target", 00:06:29.782 "config": null 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "keyring", 00:06:29.782 "config": [] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "iobuf", 00:06:29.782 "config": [ 00:06:29.782 { 00:06:29.782 "method": "iobuf_set_options", 00:06:29.782 "params": { 00:06:29.782 "small_pool_count": 8192, 00:06:29.782 "large_pool_count": 1024, 00:06:29.782 "small_bufsize": 8192, 00:06:29.782 "large_bufsize": 135168 00:06:29.782 } 00:06:29.782 } 00:06:29.782 ] 00:06:29.782 }, 00:06:29.782 { 
00:06:29.782 "subsystem": "sock", 00:06:29.782 "config": [ 00:06:29.782 { 00:06:29.782 "method": "sock_set_default_impl", 00:06:29.782 "params": { 00:06:29.782 "impl_name": "posix" 00:06:29.782 } 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "method": "sock_impl_set_options", 00:06:29.782 "params": { 00:06:29.782 "impl_name": "ssl", 00:06:29.782 "recv_buf_size": 4096, 00:06:29.782 "send_buf_size": 4096, 00:06:29.782 "enable_recv_pipe": true, 00:06:29.782 "enable_quickack": false, 00:06:29.782 "enable_placement_id": 0, 00:06:29.782 "enable_zerocopy_send_server": true, 00:06:29.782 "enable_zerocopy_send_client": false, 00:06:29.782 "zerocopy_threshold": 0, 00:06:29.782 "tls_version": 0, 00:06:29.782 "enable_ktls": false 00:06:29.782 } 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "method": "sock_impl_set_options", 00:06:29.782 "params": { 00:06:29.782 "impl_name": "posix", 00:06:29.782 "recv_buf_size": 2097152, 00:06:29.782 "send_buf_size": 2097152, 00:06:29.782 "enable_recv_pipe": true, 00:06:29.782 "enable_quickack": false, 00:06:29.782 "enable_placement_id": 0, 00:06:29.782 "enable_zerocopy_send_server": true, 00:06:29.782 "enable_zerocopy_send_client": false, 00:06:29.782 "zerocopy_threshold": 0, 00:06:29.782 "tls_version": 0, 00:06:29.782 "enable_ktls": false 00:06:29.782 } 00:06:29.782 } 00:06:29.782 ] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "vmd", 00:06:29.782 "config": [] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "accel", 00:06:29.782 "config": [ 00:06:29.782 { 00:06:29.782 "method": "accel_set_options", 00:06:29.782 "params": { 00:06:29.782 "small_cache_size": 128, 00:06:29.782 "large_cache_size": 16, 00:06:29.782 "task_count": 2048, 00:06:29.782 "sequence_count": 2048, 00:06:29.782 "buf_count": 2048 00:06:29.782 } 00:06:29.782 } 00:06:29.782 ] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "bdev", 00:06:29.782 "config": [ 00:06:29.782 { 00:06:29.782 "method": "bdev_set_options", 00:06:29.782 "params": { 00:06:29.782 "bdev_io_pool_size": 65535, 00:06:29.782 "bdev_io_cache_size": 256, 00:06:29.782 "bdev_auto_examine": true, 00:06:29.782 "iobuf_small_cache_size": 128, 00:06:29.782 "iobuf_large_cache_size": 16 00:06:29.782 } 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "method": "bdev_raid_set_options", 00:06:29.782 "params": { 00:06:29.782 "process_window_size_kb": 1024, 00:06:29.782 "process_max_bandwidth_mb_sec": 0 00:06:29.782 } 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "method": "bdev_iscsi_set_options", 00:06:29.782 "params": { 00:06:29.782 "timeout_sec": 30 00:06:29.782 } 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "method": "bdev_nvme_set_options", 00:06:29.782 "params": { 00:06:29.782 "action_on_timeout": "none", 00:06:29.782 "timeout_us": 0, 00:06:29.782 "timeout_admin_us": 0, 00:06:29.782 "keep_alive_timeout_ms": 10000, 00:06:29.782 "arbitration_burst": 0, 00:06:29.782 "low_priority_weight": 0, 00:06:29.782 "medium_priority_weight": 0, 00:06:29.782 "high_priority_weight": 0, 00:06:29.782 "nvme_adminq_poll_period_us": 10000, 00:06:29.782 "nvme_ioq_poll_period_us": 0, 00:06:29.782 "io_queue_requests": 0, 00:06:29.782 "delay_cmd_submit": true, 00:06:29.782 "transport_retry_count": 4, 00:06:29.782 "bdev_retry_count": 3, 00:06:29.782 "transport_ack_timeout": 0, 00:06:29.782 "ctrlr_loss_timeout_sec": 0, 00:06:29.782 "reconnect_delay_sec": 0, 00:06:29.782 "fast_io_fail_timeout_sec": 0, 00:06:29.782 "disable_auto_failback": false, 00:06:29.782 "generate_uuids": false, 00:06:29.782 "transport_tos": 0, 00:06:29.782 "nvme_error_stat": false, 
00:06:29.782 "rdma_srq_size": 0, 00:06:29.782 "io_path_stat": false, 00:06:29.782 "allow_accel_sequence": false, 00:06:29.782 "rdma_max_cq_size": 0, 00:06:29.782 "rdma_cm_event_timeout_ms": 0, 00:06:29.782 "dhchap_digests": [ 00:06:29.782 "sha256", 00:06:29.782 "sha384", 00:06:29.782 "sha512" 00:06:29.782 ], 00:06:29.782 "dhchap_dhgroups": [ 00:06:29.782 "null", 00:06:29.782 "ffdhe2048", 00:06:29.782 "ffdhe3072", 00:06:29.782 "ffdhe4096", 00:06:29.782 "ffdhe6144", 00:06:29.782 "ffdhe8192" 00:06:29.782 ] 00:06:29.782 } 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "method": "bdev_nvme_set_hotplug", 00:06:29.782 "params": { 00:06:29.782 "period_us": 100000, 00:06:29.782 "enable": false 00:06:29.782 } 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "method": "bdev_wait_for_examine" 00:06:29.782 } 00:06:29.782 ] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "scsi", 00:06:29.782 "config": null 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "scheduler", 00:06:29.782 "config": [ 00:06:29.782 { 00:06:29.782 "method": "framework_set_scheduler", 00:06:29.782 "params": { 00:06:29.782 "name": "static" 00:06:29.782 } 00:06:29.782 } 00:06:29.782 ] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "vhost_scsi", 00:06:29.782 "config": [] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "vhost_blk", 00:06:29.782 "config": [] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "ublk", 00:06:29.782 "config": [] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "nbd", 00:06:29.782 "config": [] 00:06:29.782 }, 00:06:29.782 { 00:06:29.782 "subsystem": "nvmf", 00:06:29.783 "config": [ 00:06:29.783 { 00:06:29.783 "method": "nvmf_set_config", 00:06:29.783 "params": { 00:06:29.783 "discovery_filter": "match_any", 00:06:29.783 "admin_cmd_passthru": { 00:06:29.783 "identify_ctrlr": false 00:06:29.783 }, 00:06:29.783 "dhchap_digests": [ 00:06:29.783 "sha256", 00:06:29.783 "sha384", 00:06:29.783 "sha512" 00:06:29.783 ], 00:06:29.783 "dhchap_dhgroups": [ 00:06:29.783 "null", 00:06:29.783 "ffdhe2048", 00:06:29.783 "ffdhe3072", 00:06:29.783 "ffdhe4096", 00:06:29.783 "ffdhe6144", 00:06:29.783 "ffdhe8192" 00:06:29.783 ] 00:06:29.783 } 00:06:29.783 }, 00:06:29.783 { 00:06:29.783 "method": "nvmf_set_max_subsystems", 00:06:29.783 "params": { 00:06:29.783 "max_subsystems": 1024 00:06:29.783 } 00:06:29.783 }, 00:06:29.783 { 00:06:29.783 "method": "nvmf_set_crdt", 00:06:29.783 "params": { 00:06:29.783 "crdt1": 0, 00:06:29.783 "crdt2": 0, 00:06:29.783 "crdt3": 0 00:06:29.783 } 00:06:29.783 }, 00:06:29.783 { 00:06:29.783 "method": "nvmf_create_transport", 00:06:29.783 "params": { 00:06:29.783 "trtype": "TCP", 00:06:29.783 "max_queue_depth": 128, 00:06:29.783 "max_io_qpairs_per_ctrlr": 127, 00:06:29.783 "in_capsule_data_size": 4096, 00:06:29.783 "max_io_size": 131072, 00:06:29.783 "io_unit_size": 131072, 00:06:29.783 "max_aq_depth": 128, 00:06:29.783 "num_shared_buffers": 511, 00:06:29.783 "buf_cache_size": 4294967295, 00:06:29.783 "dif_insert_or_strip": false, 00:06:29.783 "zcopy": false, 00:06:29.783 "c2h_success": true, 00:06:29.783 "sock_priority": 0, 00:06:29.783 "abort_timeout_sec": 1, 00:06:29.783 "ack_timeout": 0, 00:06:29.783 "data_wr_pool_size": 0 00:06:29.783 } 00:06:29.783 } 00:06:29.783 ] 00:06:29.783 }, 00:06:29.783 { 00:06:29.783 "subsystem": "iscsi", 00:06:29.783 "config": [ 00:06:29.783 { 00:06:29.783 "method": "iscsi_set_options", 00:06:29.783 "params": { 00:06:29.783 "node_base": "iqn.2016-06.io.spdk", 00:06:29.783 "max_sessions": 128, 00:06:29.783 
"max_connections_per_session": 2, 00:06:29.783 "max_queue_depth": 64, 00:06:29.783 "default_time2wait": 2, 00:06:29.783 "default_time2retain": 20, 00:06:29.783 "first_burst_length": 8192, 00:06:29.783 "immediate_data": true, 00:06:29.783 "allow_duplicated_isid": false, 00:06:29.783 "error_recovery_level": 0, 00:06:29.783 "nop_timeout": 60, 00:06:29.783 "nop_in_interval": 30, 00:06:29.783 "disable_chap": false, 00:06:29.783 "require_chap": false, 00:06:29.783 "mutual_chap": false, 00:06:29.783 "chap_group": 0, 00:06:29.783 "max_large_datain_per_connection": 64, 00:06:29.783 "max_r2t_per_connection": 4, 00:06:29.783 "pdu_pool_size": 36864, 00:06:29.783 "immediate_data_pool_size": 16384, 00:06:29.783 "data_out_pool_size": 2048 00:06:29.783 } 00:06:29.783 } 00:06:29.783 ] 00:06:29.783 } 00:06:29.783 ] 00:06:29.783 } 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3519095 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3519095 ']' 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3519095 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3519095 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3519095' 00:06:29.783 killing process with pid 3519095 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3519095 00:06:29.783 08:21:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3519095 00:06:30.044 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3519345 00:06:30.044 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:30.044 08:21:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:35.337 08:21:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3519345 00:06:35.337 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3519345 ']' 00:06:35.337 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3519345 00:06:35.337 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:35.337 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.337 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3519345 00:06:35.337 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.337 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.338 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 3519345' 00:06:35.338 killing process with pid 3519345 00:06:35.338 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3519345 00:06:35.338 08:21:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3519345 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:35.338 00:06:35.338 real 0m6.595s 00:06:35.338 user 0m6.491s 00:06:35.338 sys 0m0.528s 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 ************************************ 00:06:35.338 END TEST skip_rpc_with_json 00:06:35.338 ************************************ 00:06:35.338 08:21:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:35.338 08:21:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.338 08:21:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.338 08:21:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 ************************************ 00:06:35.338 START TEST skip_rpc_with_delay 00:06:35.338 ************************************ 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.338 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.599 [2024-10-01 
08:21:27.200646] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:35.599 [2024-10-01 08:21:27.200719] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.599 00:06:35.599 real 0m0.058s 00:06:35.599 user 0m0.041s 00:06:35.599 sys 0m0.017s 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.599 08:21:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:35.599 ************************************ 00:06:35.599 END TEST skip_rpc_with_delay 00:06:35.599 ************************************ 00:06:35.599 08:21:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:35.599 08:21:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:35.599 08:21:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:35.599 08:21:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.599 08:21:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.599 08:21:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.599 ************************************ 00:06:35.599 START TEST exit_on_failed_rpc_init 00:06:35.599 ************************************ 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3520502 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3520502 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3520502 ']' 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.599 08:21:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:35.599 [2024-10-01 08:21:27.364846] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:06:35.599 [2024-10-01 08:21:27.364918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520502 ] 00:06:35.859 [2024-10-01 08:21:27.432230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.859 [2024-10-01 08:21:27.506012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.429 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.429 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:36.429 08:21:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.429 08:21:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.429 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:36.429 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:36.430 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.430 [2024-10-01 08:21:28.223679] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:06:36.430 [2024-10-01 08:21:28.223732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520607 ] 00:06:36.690 [2024-10-01 08:21:28.301493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.690 [2024-10-01 08:21:28.365938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.690 [2024-10-01 08:21:28.366000] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:36.690 [2024-10-01 08:21:28.366010] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:36.690 [2024-10-01 08:21:28.366017] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3520502 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3520502 ']' 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3520502 00:06:36.690 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:36.691 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.691 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3520502 00:06:36.691 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.691 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.691 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3520502' 00:06:36.691 killing process with pid 3520502 00:06:36.691 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3520502 00:06:36.691 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3520502 00:06:36.956 00:06:36.956 real 0m1.433s 00:06:36.956 user 0m1.711s 00:06:36.956 sys 0m0.401s 00:06:36.956 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.956 08:21:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:36.956 ************************************ 00:06:36.956 END TEST exit_on_failed_rpc_init 00:06:36.956 ************************************ 00:06:36.956 08:21:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:36.956 00:06:36.956 real 0m13.902s 00:06:36.956 user 0m13.571s 00:06:36.956 sys 0m1.511s 00:06:36.957 08:21:28 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.957 08:21:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.957 ************************************ 00:06:36.957 END TEST skip_rpc 00:06:36.957 ************************************ 00:06:37.218 08:21:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:37.218 08:21:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.218 08:21:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.218 08:21:28 -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.218 ************************************ 00:06:37.218 START TEST rpc_client 00:06:37.218 ************************************ 00:06:37.218 08:21:28 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:37.218 * Looking for test storage... 00:06:37.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:37.218 08:21:28 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.218 08:21:28 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.218 08:21:28 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.218 08:21:29 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.218 08:21:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:37.218 08:21:29 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.218 08:21:29 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.218 --rc genhtml_branch_coverage=1 00:06:37.218 --rc genhtml_function_coverage=1 00:06:37.218 --rc genhtml_legend=1 00:06:37.218 --rc geninfo_all_blocks=1 00:06:37.218 --rc geninfo_unexecuted_blocks=1 00:06:37.218 00:06:37.218 ' 00:06:37.218 08:21:29 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.218 --rc genhtml_branch_coverage=1 00:06:37.218 --rc genhtml_function_coverage=1 00:06:37.218 --rc genhtml_legend=1 00:06:37.218 --rc geninfo_all_blocks=1 00:06:37.218 --rc geninfo_unexecuted_blocks=1 00:06:37.218 00:06:37.218 ' 00:06:37.218 08:21:29 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.218 --rc genhtml_branch_coverage=1 00:06:37.218 --rc genhtml_function_coverage=1 00:06:37.218 --rc genhtml_legend=1 00:06:37.218 --rc geninfo_all_blocks=1 00:06:37.218 --rc geninfo_unexecuted_blocks=1 00:06:37.218 00:06:37.218 ' 00:06:37.218 08:21:29 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.218 --rc genhtml_branch_coverage=1 00:06:37.218 --rc genhtml_function_coverage=1 00:06:37.218 --rc genhtml_legend=1 00:06:37.218 --rc geninfo_all_blocks=1 00:06:37.218 --rc geninfo_unexecuted_blocks=1 00:06:37.218 00:06:37.218 ' 00:06:37.219 08:21:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:37.479 OK 00:06:37.479 08:21:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:37.479 00:06:37.479 real 0m0.223s 00:06:37.479 user 0m0.136s 00:06:37.479 sys 0m0.099s 00:06:37.479 08:21:29 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.479 08:21:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:37.479 ************************************ 00:06:37.479 END TEST rpc_client 00:06:37.479 ************************************ 00:06:37.479 08:21:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
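The xtrace runs above are scripts/common.sh deciding whether the installed lcov predates version 2, which controls which coverage flags the run exports. Reduced to its core, the traced comparison works roughly like the sketch below; this is a simplification for illustration only, since the real cmp_versions in scripts/common.sh handles more operators and edge cases than the trace shows:

    # Sketch of the version gate traced above. Assumes numeric,
    # dot/dash/colon-separated version strings, per the IFS=.-: split
    # visible in the trace.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-: v d1 d2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"   # e.g. "1.15" -> (1 15)
        read -ra ver2 <<< "$3"   # e.g. "2"    -> (2)
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
            ((d1 > d2)) && { [[ $2 == '>' ]]; return; }
            ((d1 < d2)) && { [[ $2 == '<' ]]; return; }
        done
        return 1   # equal: neither strictly '<' nor '>'
    }

    # As in the run above: lcov 1.15 is older than 2.x, so the legacy
    # "--rc lcov_branch_coverage=1" style options get selected.
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov"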
00:06:37.479 08:21:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.479 08:21:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.479 08:21:29 -- common/autotest_common.sh@10 -- # set +x 00:06:37.479 ************************************ 00:06:37.479 START TEST json_config 00:06:37.479 ************************************ 00:06:37.479 08:21:29 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:37.479 08:21:29 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.479 08:21:29 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.479 08:21:29 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.479 08:21:29 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.479 08:21:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.479 08:21:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.479 08:21:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.479 08:21:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.479 08:21:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.479 08:21:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.479 08:21:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.479 08:21:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.479 08:21:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.479 08:21:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.479 08:21:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.479 08:21:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:37.479 08:21:29 json_config -- scripts/common.sh@345 -- # : 1 00:06:37.479 08:21:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.479 08:21:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.479 08:21:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:37.479 08:21:29 json_config -- scripts/common.sh@353 -- # local d=1 00:06:37.479 08:21:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.741 08:21:29 json_config -- scripts/common.sh@355 -- # echo 1 00:06:37.741 08:21:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.741 08:21:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:37.741 08:21:29 json_config -- scripts/common.sh@353 -- # local d=2 00:06:37.741 08:21:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.741 08:21:29 json_config -- scripts/common.sh@355 -- # echo 2 00:06:37.741 08:21:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.741 08:21:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.741 08:21:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.741 08:21:29 json_config -- scripts/common.sh@368 -- # return 0 00:06:37.741 08:21:29 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.741 08:21:29 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.741 --rc genhtml_branch_coverage=1 00:06:37.741 --rc genhtml_function_coverage=1 00:06:37.741 --rc genhtml_legend=1 00:06:37.741 --rc geninfo_all_blocks=1 00:06:37.741 --rc geninfo_unexecuted_blocks=1 00:06:37.741 00:06:37.741 ' 00:06:37.741 08:21:29 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.741 --rc genhtml_branch_coverage=1 00:06:37.741 --rc genhtml_function_coverage=1 00:06:37.741 --rc genhtml_legend=1 00:06:37.741 --rc geninfo_all_blocks=1 00:06:37.741 --rc geninfo_unexecuted_blocks=1 00:06:37.741 00:06:37.741 ' 00:06:37.741 08:21:29 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.741 --rc genhtml_branch_coverage=1 00:06:37.741 --rc genhtml_function_coverage=1 00:06:37.741 --rc genhtml_legend=1 00:06:37.741 --rc geninfo_all_blocks=1 00:06:37.741 --rc geninfo_unexecuted_blocks=1 00:06:37.741 00:06:37.741 ' 00:06:37.741 08:21:29 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.741 --rc genhtml_branch_coverage=1 00:06:37.741 --rc genhtml_function_coverage=1 00:06:37.741 --rc genhtml_legend=1 00:06:37.741 --rc geninfo_all_blocks=1 00:06:37.741 --rc geninfo_unexecuted_blocks=1 00:06:37.741 00:06:37.741 ' 00:06:37.741 08:21:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:37.741 08:21:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.741 08:21:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.741 08:21:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.741 08:21:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.741 08:21:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.741 08:21:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.742 08:21:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.742 08:21:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.742 08:21:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.742 08:21:29 json_config -- paths/export.sh@5 -- # export PATH 00:06:37.742 08:21:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@51 -- # : 0 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:06:37.742 08:21:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.742 08:21:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:37.742 INFO: JSON configuration test init 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.742 08:21:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:37.742 08:21:29 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:37.742 08:21:29 json_config -- json_config/common.sh@10 -- # shift 00:06:37.742 08:21:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.742 08:21:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.742 08:21:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.742 08:21:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.742 08:21:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.742 08:21:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3520972 00:06:37.742 08:21:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.742 Waiting for target to run... 00:06:37.742 08:21:29 json_config -- json_config/common.sh@25 -- # waitforlisten 3520972 /var/tmp/spdk_tgt.sock 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@831 -- # '[' -z 3520972 ']' 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.742 08:21:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.742 08:21:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.742 [2024-10-01 08:21:29.434703] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:06:37.742 [2024-10-01 08:21:29.434780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520972 ]
00:06:38.004 [2024-10-01 08:21:29.702257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.004 [2024-10-01 08:21:29.755701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.576 08:21:30 json_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:38.576 08:21:30 json_config -- common/autotest_common.sh@864 -- # return 0
00:06:38.576 08:21:30 json_config -- json_config/common.sh@26 -- # echo ''
00:06:38.576
00:06:38.576 08:21:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:06:38.576 08:21:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:06:38.576 08:21:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:38.576 08:21:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:38.576 08:21:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:06:38.576 08:21:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:06:38.576 08:21:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:38.576 08:21:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:38.576 08:21:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:06:38.576 08:21:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:06:38.576 08:21:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:06:39.147 08:21:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:06:39.147 08:21:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:06:39.147 08:21:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:39.147 08:21:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:39.147 08:21:30 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:06:39.147 08:21:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:06:39.148 08:21:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:06:39.148 08:21:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:06:39.148 08:21:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:06:39.148 08:21:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:06:39.148 08:21:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:06:39.148 08:21:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@51 -- # local get_types
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@54 -- # sort
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:06:39.409 08:21:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:39.409 08:21:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@62 -- # return 0
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:06:39.409 08:21:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:39.409 08:21:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:06:39.409 08:21:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:06:39.409 08:21:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:06:39.409 MallocForNvmf0
00:06:39.670 08:21:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:06:39.670 08:21:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:06:39.670 MallocForNvmf1
00:06:39.670 08:21:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:06:39.670 08:21:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:06:39.932 [2024-10-01 08:21:31.586108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:39.932 08:21:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:39.932 08:21:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:40.193 08:21:31 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:06:40.193 08:21:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:06:40.193 08:21:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:06:40.193 08:21:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:06:40.454 08:21:32 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:06:40.454 08:21:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:06:40.715 [2024-10-01 08:21:32.316571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:06:40.715 08:21:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:06:40.715 08:21:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:40.715 08:21:32 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:40.715 08:21:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:06:40.715 08:21:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:40.715 08:21:32 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:40.715 08:21:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:06:40.715 08:21:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:06:40.715 08:21:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:06:40.977 MallocBdevForConfigChangeCheck
00:06:40.977 08:21:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:06:40.977 08:21:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:40.977 08:21:32 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:40.977 08:21:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:06:40.977 08:21:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:41.237 08:21:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:06:41.237 08:21:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:06:41.237 08:21:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:06:41.237 08:21:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:06:41.237 08:21:32 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:06:41.809 Calling clear_iscsi_subsystem
00:06:41.809 Calling clear_nvmf_subsystem
00:06:41.809 Calling clear_nbd_subsystem
00:06:41.809 Calling clear_ublk_subsystem
00:06:41.809 Calling clear_vhost_blk_subsystem
00:06:41.809 Calling clear_vhost_scsi_subsystem
00:06:41.809 Calling clear_bdev_subsystem
00:06:41.809 08:21:33 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:06:41.809 08:21:33 json_config -- json_config/json_config.sh@350 -- # count=100
00:06:41.809 08:21:33 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:06:41.809 08:21:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:06:41.809 08:21:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:41.809 08:21:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:06:42.069 08:21:33 json_config -- json_config/json_config.sh@352 -- # break
00:06:42.069 08:21:33 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:06:42.069 08:21:33 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:06:42.069 08:21:33 json_config -- json_config/common.sh@31 -- # local app=target
00:06:42.069 08:21:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:42.069 08:21:33 json_config -- json_config/common.sh@35 -- # [[ -n 3520972 ]]
00:06:42.069 08:21:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3520972
00:06:42.069 08:21:33 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:42.069 08:21:33 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:42.069 08:21:33 json_config -- json_config/common.sh@41 -- # kill -0 3520972
00:06:42.069 08:21:33 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:06:42.639 08:21:34 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:06:42.639 08:21:34 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:42.639 08:21:34 json_config -- json_config/common.sh@41 -- # kill -0 3520972
00:06:42.639 08:21:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:42.639 08:21:34 json_config -- json_config/common.sh@43 -- # break
00:06:42.639 08:21:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:42.639 08:21:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
SPDK target shutdown done
00:06:42.639 08:21:34 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
INFO: relaunching applications...
00:06:42.639 08:21:34 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:42.639 08:21:34 json_config -- json_config/common.sh@9 -- # local app=target
00:06:42.639 08:21:34 json_config -- json_config/common.sh@10 -- # shift
00:06:42.639 08:21:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:42.639 08:21:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:42.639 08:21:34 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:06:42.639 08:21:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:42.639 08:21:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:42.639 08:21:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3522112
00:06:42.639 08:21:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:06:42.639 08:21:34 json_config -- json_config/common.sh@25 -- # waitforlisten 3522112 /var/tmp/spdk_tgt.sock
00:06:42.639 08:21:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:42.639 08:21:34 json_config -- common/autotest_common.sh@831 -- # '[' -z 3522112 ']'
00:06:42.639 08:21:34 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:42.639 08:21:34 json_config -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:42.639 08:21:34 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:42.639 08:21:34 json_config -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:42.639 08:21:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:42.639 [2024-10-01 08:21:34.295563] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
[2024-10-01 08:21:34.295618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522112 ]
00:06:42.899 [2024-10-01 08:21:34.599153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.899 [2024-10-01 08:21:34.660815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.468 [2024-10-01 08:21:35.177022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:43.468 [2024-10-01 08:21:35.209425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:06:43.468 08:21:35 json_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:43.468 08:21:35 json_config -- common/autotest_common.sh@864 -- # return 0
00:06:43.468 08:21:35 json_config -- json_config/common.sh@26 -- # echo ''
00:06:43.468
00:06:43.468 08:21:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:06:43.468 08:21:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
INFO: Checking if target configuration is the same...
00:06:43.468 08:21:35 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:43.468 08:21:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:06:43.468 08:21:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:43.468 + '[' 2 -ne 2 ']'
00:06:43.468 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:06:43.468 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:06:43.468 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:43.468 +++ basename /dev/fd/62
00:06:43.468 ++ mktemp /tmp/62.XXX
00:06:43.468 + tmp_file_1=/tmp/62.V9u
00:06:43.468 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:43.468 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:06:43.468 + tmp_file_2=/tmp/spdk_tgt_config.json.9Ic
00:06:43.468 + ret=0
00:06:43.468 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:44.039 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:44.039 + diff -u /tmp/62.V9u /tmp/spdk_tgt_config.json.9Ic
00:06:44.039 + echo 'INFO: JSON config files are the same'
INFO: JSON config files are the same
00:06:44.039 + rm /tmp/62.V9u /tmp/spdk_tgt_config.json.9Ic
00:06:44.039 + exit 0
00:06:44.039 08:21:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:06:44.039 08:21:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
INFO: changing configuration and checking if this can be detected...
00:06:44.039 08:21:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:06:44.039 08:21:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:06:44.039 08:21:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:44.039 08:21:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:06:44.039 08:21:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:44.039 + '[' 2 -ne 2 ']'
00:06:44.039 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:06:44.039 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:06:44.039 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:44.039 +++ basename /dev/fd/62
00:06:44.039 ++ mktemp /tmp/62.XXX
00:06:44.039 + tmp_file_1=/tmp/62.2DX
00:06:44.039 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:44.039 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:06:44.039 + tmp_file_2=/tmp/spdk_tgt_config.json.lPC
00:06:44.039 + ret=0
00:06:44.039 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:44.612 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:44.612 + diff -u /tmp/62.2DX /tmp/spdk_tgt_config.json.lPC
00:06:44.612 + ret=1
00:06:44.612 + echo '=== Start of file: /tmp/62.2DX ==='
00:06:44.612 + cat /tmp/62.2DX
00:06:44.612 + echo '=== End of file: /tmp/62.2DX ==='
00:06:44.612 + echo ''
00:06:44.612 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lPC ==='
00:06:44.612 + cat /tmp/spdk_tgt_config.json.lPC
00:06:44.612 + echo '=== End of file: /tmp/spdk_tgt_config.json.lPC ==='
00:06:44.612 + echo ''
00:06:44.612 + rm /tmp/62.2DX /tmp/spdk_tgt_config.json.lPC
00:06:44.612 + exit 1
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
INFO: configuration change detected.
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 3522112 ]]
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@200 -- # uname -s
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:44.612 08:21:36 json_config -- json_config/json_config.sh@330 -- # killprocess 3522112
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@950 -- # '[' -z 3522112 ']'
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@954 -- # kill -0 3522112
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@955 -- # uname
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3522112
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3522112'
killing process with pid 3522112
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@969 -- # kill 3522112
00:06:44.612 08:21:36 json_config -- common/autotest_common.sh@974 -- # wait 3522112
00:06:44.874 08:21:36 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:44.874 08:21:36 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:06:44.874 08:21:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:44.874 08:21:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:44.874 08:21:36 json_config -- json_config/json_config.sh@335 -- # return 0
00:06:44.874 08:21:36 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
INFO: Success
00:06:44.874
00:06:44.874 real 0m7.518s
00:06:44.874 user 0m9.124s
00:06:44.874 sys 0m1.925s
00:06:44.874 08:21:36 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:44.874 08:21:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:44.874 ************************************
00:06:44.874 END TEST json_config
00:06:44.874 ************************************
00:06:45.136 08:21:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:45.136 08:21:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:45.136 08:21:36 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:45.136 08:21:36 -- common/autotest_common.sh@10 -- # set +x
00:06:45.136 ************************************
00:06:45.136 START TEST json_config_extra_key
00:06:45.136 ************************************
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:45.136 08:21:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:45.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.136 --rc genhtml_branch_coverage=1
00:06:45.136 --rc genhtml_function_coverage=1
00:06:45.136 --rc genhtml_legend=1
00:06:45.136 --rc geninfo_all_blocks=1
00:06:45.136 --rc geninfo_unexecuted_blocks=1
00:06:45.136
00:06:45.136 '
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:45.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.136 --rc genhtml_branch_coverage=1
00:06:45.136 --rc genhtml_function_coverage=1
00:06:45.136 --rc genhtml_legend=1
00:06:45.136 --rc geninfo_all_blocks=1
00:06:45.136 --rc geninfo_unexecuted_blocks=1
00:06:45.136
00:06:45.136 '
00:06:45.136 08:21:36 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:45.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.136 --rc genhtml_branch_coverage=1
00:06:45.136 --rc genhtml_function_coverage=1
00:06:45.136 --rc genhtml_legend=1
00:06:45.137 --rc geninfo_all_blocks=1
00:06:45.137 --rc geninfo_unexecuted_blocks=1
00:06:45.137
00:06:45.137 '
00:06:45.137 08:21:36 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:45.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.137 --rc genhtml_branch_coverage=1
00:06:45.137 --rc genhtml_function_coverage=1
00:06:45.137 --rc genhtml_legend=1
00:06:45.137 --rc geninfo_all_blocks=1
00:06:45.137 --rc geninfo_unexecuted_blocks=1
00:06:45.137
00:06:45.137 '
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:45.137 08:21:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:06:45.137 08:21:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:45.137 08:21:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:45.137 08:21:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:45.137 08:21:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:45.137 08:21:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:45.137 08:21:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:45.137 08:21:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:45.137 08:21:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:45.137 08:21:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
INFO: launching applications...
00:06:45.137 08:21:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3522902
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3522902 /var/tmp/spdk_tgt.sock
00:06:45.137 08:21:36 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3522902 ']'
00:06:45.137 08:21:36 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:45.137 08:21:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:45.137 08:21:36 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:45.137 08:21:36 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:45.137 08:21:36 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:45.137 08:21:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:45.398 [2024-10-01 08:21:37.011302] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
[2024-10-01 08:21:37.011371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522902 ]
00:06:45.659 [2024-10-01 08:21:37.292936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.659 [2024-10-01 08:21:37.344662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.231 08:21:37 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:46.231 08:21:37 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:46.231
00:06:46.231 08:21:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:06:46.231 08:21:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3522902 ]]
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3522902
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3522902
00:06:46.231 08:21:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:46.493 08:21:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:46.493 08:21:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:46.493 08:21:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3522902
00:06:46.493 08:21:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:46.493 08:21:38 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:46.493 08:21:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:46.493 08:21:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
SPDK target shutdown done
00:06:46.493 08:21:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
Success
00:06:46.493
00:06:46.493 real 0m1.558s
00:06:46.493 user 0m1.204s
00:06:46.493 sys 0m0.410s
00:06:46.493 08:21:38 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:46.493 08:21:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:46.493 ************************************
00:06:46.493 END TEST json_config_extra_key
00:06:46.493 ************************************
00:06:46.756 08:21:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:46.756 08:21:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:46.756 08:21:38 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:46.756 08:21:38 -- common/autotest_common.sh@10 -- # set +x
00:06:46.756 ************************************
00:06:46.756 START TEST alias_rpc
00:06:46.756 ************************************
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:46.756 * Looking for test storage...
00:06:46.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@345 -- # : 1
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:46.756 08:21:38 alias_rpc -- scripts/common.sh@368 -- # return 0
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:46.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.756 --rc genhtml_branch_coverage=1
00:06:46.756 --rc genhtml_function_coverage=1
00:06:46.756 --rc genhtml_legend=1
00:06:46.756 --rc geninfo_all_blocks=1
00:06:46.756 --rc geninfo_unexecuted_blocks=1
00:06:46.756
00:06:46.756 '
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:46.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.756 --rc genhtml_branch_coverage=1
00:06:46.756 --rc genhtml_function_coverage=1
00:06:46.756 --rc genhtml_legend=1
00:06:46.756 --rc geninfo_all_blocks=1
00:06:46.756 --rc geninfo_unexecuted_blocks=1
00:06:46.756
00:06:46.756 '
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:46.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.756 --rc genhtml_branch_coverage=1
00:06:46.756 --rc genhtml_function_coverage=1
00:06:46.756 --rc genhtml_legend=1
00:06:46.756 --rc geninfo_all_blocks=1
00:06:46.756 --rc geninfo_unexecuted_blocks=1
00:06:46.756
00:06:46.756 '
00:06:46.756 08:21:38 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:46.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.756 --rc genhtml_branch_coverage=1
00:06:46.756 --rc genhtml_function_coverage=1
00:06:46.756 --rc genhtml_legend=1
00:06:46.756 --rc geninfo_all_blocks=1
00:06:46.756 --rc geninfo_unexecuted_blocks=1
00:06:46.756
00:06:46.756 '
00:06:46.756 08:21:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:47.019 08:21:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3523276
00:06:47.019 08:21:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3523276
00:06:47.019 08:21:38 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3523276 ']'
00:06:47.019 08:21:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:47.019 08:21:38 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.019 08:21:38 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:47.019 08:21:38 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.019 08:21:38 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:47.019 08:21:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.019 [2024-10-01 08:21:38.639412] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:06:47.019 [2024-10-01 08:21:38.639492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523276 ]
[2024-10-01 08:21:38.703707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-01 08:21:38.777804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:47.961 08:21:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:47.961 08:21:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3523276
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3523276 ']'
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3523276
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@955 -- # uname
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3523276
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3523276'
killing process with pid 3523276
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@969 -- # kill 3523276
00:06:47.961 08:21:39 alias_rpc -- common/autotest_common.sh@974 -- # wait 3523276
00:06:48.222
00:06:48.222 real 0m1.538s
00:06:48.222 user 0m1.662s
00:06:48.222 sys 0m0.435s
00:06:48.222 08:21:39 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:48.222 08:21:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:48.222 ************************************
00:06:48.222 END TEST alias_rpc
00:06:48.222 ************************************
00:06:48.222 08:21:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:06:48.222 08:21:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:48.222 08:21:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:48.222 08:21:39 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:48.222 08:21:39 -- common/autotest_common.sh@10 -- # set +x
00:06:48.222 ************************************
00:06:48.222 START TEST spdkcli_tcp
00:06:48.222 ************************************
00:06:48.483 08:21:39 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:48.483 * Looking for test storage...
00:06:48.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:06:48.483 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:48.483 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version
00:06:48.483 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:48.483 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:48.484 08:21:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:48.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.484 --rc genhtml_branch_coverage=1
00:06:48.484 --rc genhtml_function_coverage=1
00:06:48.484 --rc genhtml_legend=1
00:06:48.484 --rc geninfo_all_blocks=1
00:06:48.484 --rc geninfo_unexecuted_blocks=1
00:06:48.484
00:06:48.484 '
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:48.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.484 --rc genhtml_branch_coverage=1
00:06:48.484 --rc genhtml_function_coverage=1
00:06:48.484 --rc genhtml_legend=1
00:06:48.484 --rc geninfo_all_blocks=1
00:06:48.484 --rc geninfo_unexecuted_blocks=1
00:06:48.484
00:06:48.484 '
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:48.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.484 --rc genhtml_branch_coverage=1
00:06:48.484 --rc genhtml_function_coverage=1
00:06:48.484 --rc genhtml_legend=1
00:06:48.484 --rc geninfo_all_blocks=1
00:06:48.484 --rc geninfo_unexecuted_blocks=1
00:06:48.484
00:06:48.484 '
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:48.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.484 --rc genhtml_branch_coverage=1
00:06:48.484 --rc genhtml_function_coverage=1
00:06:48.484 --rc genhtml_legend=1
00:06:48.484 --rc geninfo_all_blocks=1
00:06:48.484 --rc geninfo_unexecuted_blocks=1
00:06:48.484
00:06:48.484 '
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3523646
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3523646
00:06:48.484 08:21:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3523646 ']'
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:48.484 08:21:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:48.484 [2024-10-01 08:21:40.263483] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:06:48.484 [2024-10-01 08:21:40.263559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523646 ]
00:06:48.745 [2024-10-01 08:21:40.331180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:48.745 [2024-10-01 08:21:40.407275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:48.745 [2024-10-01 08:21:40.407276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.319 08:21:41 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:49.319 08:21:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0
00:06:49.319 08:21:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3523710
00:06:49.319 08:21:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:49.319 08:21:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:49.582 [
00:06:49.582 "bdev_malloc_delete",
00:06:49.582 "bdev_malloc_create",
00:06:49.582 "bdev_null_resize",
00:06:49.582 "bdev_null_delete",
00:06:49.582 "bdev_null_create",
00:06:49.582 "bdev_nvme_cuse_unregister",
00:06:49.582 "bdev_nvme_cuse_register",
00:06:49.582 "bdev_opal_new_user",
00:06:49.582 "bdev_opal_set_lock_state",
00:06:49.582 "bdev_opal_delete",
00:06:49.582 "bdev_opal_get_info",
00:06:49.582 "bdev_opal_create",
00:06:49.582 "bdev_nvme_opal_revert",
00:06:49.582 "bdev_nvme_opal_init",
00:06:49.582 "bdev_nvme_send_cmd",
00:06:49.582 "bdev_nvme_set_keys",
00:06:49.582 "bdev_nvme_get_path_iostat",
00:06:49.582 "bdev_nvme_get_mdns_discovery_info",
00:06:49.582 "bdev_nvme_stop_mdns_discovery",
00:06:49.582 "bdev_nvme_start_mdns_discovery",
00:06:49.582 "bdev_nvme_set_multipath_policy",
00:06:49.582 "bdev_nvme_set_preferred_path",
00:06:49.582 "bdev_nvme_get_io_paths",
00:06:49.582 "bdev_nvme_remove_error_injection",
00:06:49.582 "bdev_nvme_add_error_injection",
00:06:49.582 "bdev_nvme_get_discovery_info",
00:06:49.582 "bdev_nvme_stop_discovery",
00:06:49.582 "bdev_nvme_start_discovery",
00:06:49.582 "bdev_nvme_get_controller_health_info",
00:06:49.582 "bdev_nvme_disable_controller",
00:06:49.582 "bdev_nvme_enable_controller",
00:06:49.582 "bdev_nvme_reset_controller",
00:06:49.582 "bdev_nvme_get_transport_statistics",
00:06:49.582 "bdev_nvme_apply_firmware",
00:06:49.582 "bdev_nvme_detach_controller",
00:06:49.582 "bdev_nvme_get_controllers",
00:06:49.582 "bdev_nvme_attach_controller",
00:06:49.582 "bdev_nvme_set_hotplug",
00:06:49.582 "bdev_nvme_set_options",
00:06:49.582 "bdev_passthru_delete",
00:06:49.582 "bdev_passthru_create",
00:06:49.582 "bdev_lvol_set_parent_bdev",
00:06:49.582 "bdev_lvol_set_parent",
00:06:49.582 "bdev_lvol_check_shallow_copy",
00:06:49.582 "bdev_lvol_start_shallow_copy",
00:06:49.582 "bdev_lvol_grow_lvstore",
00:06:49.582 "bdev_lvol_get_lvols",
00:06:49.582 "bdev_lvol_get_lvstores",
00:06:49.582 "bdev_lvol_delete",
00:06:49.582 "bdev_lvol_set_read_only",
00:06:49.582 "bdev_lvol_resize",
00:06:49.582 "bdev_lvol_decouple_parent",
00:06:49.582 "bdev_lvol_inflate",
00:06:49.582 "bdev_lvol_rename",
00:06:49.582 "bdev_lvol_clone_bdev",
00:06:49.582 "bdev_lvol_clone",
00:06:49.582 "bdev_lvol_snapshot",
00:06:49.582 "bdev_lvol_create",
00:06:49.582 "bdev_lvol_delete_lvstore",
00:06:49.582 "bdev_lvol_rename_lvstore",
00:06:49.582 "bdev_lvol_create_lvstore", 00:06:49.582 "bdev_raid_set_options", 00:06:49.582 "bdev_raid_remove_base_bdev", 00:06:49.582 "bdev_raid_add_base_bdev", 00:06:49.582 "bdev_raid_delete", 00:06:49.582 "bdev_raid_create", 00:06:49.582 "bdev_raid_get_bdevs", 00:06:49.582 "bdev_error_inject_error", 00:06:49.582 "bdev_error_delete", 00:06:49.582 "bdev_error_create", 00:06:49.582 "bdev_split_delete", 00:06:49.582 "bdev_split_create", 00:06:49.582 "bdev_delay_delete", 00:06:49.582 "bdev_delay_create", 00:06:49.582 "bdev_delay_update_latency", 00:06:49.582 "bdev_zone_block_delete", 00:06:49.582 "bdev_zone_block_create", 00:06:49.582 "blobfs_create", 00:06:49.582 "blobfs_detect", 00:06:49.582 "blobfs_set_cache_size", 00:06:49.582 "bdev_aio_delete", 00:06:49.582 "bdev_aio_rescan", 00:06:49.582 "bdev_aio_create", 00:06:49.582 "bdev_ftl_set_property", 00:06:49.582 "bdev_ftl_get_properties", 00:06:49.582 "bdev_ftl_get_stats", 00:06:49.582 "bdev_ftl_unmap", 00:06:49.582 "bdev_ftl_unload", 00:06:49.582 "bdev_ftl_delete", 00:06:49.582 "bdev_ftl_load", 00:06:49.582 "bdev_ftl_create", 00:06:49.582 "bdev_virtio_attach_controller", 00:06:49.582 "bdev_virtio_scsi_get_devices", 00:06:49.582 "bdev_virtio_detach_controller", 00:06:49.582 "bdev_virtio_blk_set_hotplug", 00:06:49.582 "bdev_iscsi_delete", 00:06:49.582 "bdev_iscsi_create", 00:06:49.582 "bdev_iscsi_set_options", 00:06:49.582 "accel_error_inject_error", 00:06:49.582 "ioat_scan_accel_module", 00:06:49.582 "dsa_scan_accel_module", 00:06:49.582 "iaa_scan_accel_module", 00:06:49.582 "vfu_virtio_create_fs_endpoint", 00:06:49.582 "vfu_virtio_create_scsi_endpoint", 00:06:49.582 "vfu_virtio_scsi_remove_target", 00:06:49.582 "vfu_virtio_scsi_add_target", 00:06:49.582 "vfu_virtio_create_blk_endpoint", 00:06:49.582 "vfu_virtio_delete_endpoint", 00:06:49.582 "keyring_file_remove_key", 00:06:49.582 "keyring_file_add_key", 00:06:49.582 "keyring_linux_set_options", 00:06:49.582 "fsdev_aio_delete", 00:06:49.582 "fsdev_aio_create", 00:06:49.582 "iscsi_get_histogram", 00:06:49.582 "iscsi_enable_histogram", 00:06:49.582 "iscsi_set_options", 00:06:49.582 "iscsi_get_auth_groups", 00:06:49.582 "iscsi_auth_group_remove_secret", 00:06:49.582 "iscsi_auth_group_add_secret", 00:06:49.582 "iscsi_delete_auth_group", 00:06:49.582 "iscsi_create_auth_group", 00:06:49.582 "iscsi_set_discovery_auth", 00:06:49.582 "iscsi_get_options", 00:06:49.582 "iscsi_target_node_request_logout", 00:06:49.582 "iscsi_target_node_set_redirect", 00:06:49.582 "iscsi_target_node_set_auth", 00:06:49.582 "iscsi_target_node_add_lun", 00:06:49.582 "iscsi_get_stats", 00:06:49.582 "iscsi_get_connections", 00:06:49.582 "iscsi_portal_group_set_auth", 00:06:49.582 "iscsi_start_portal_group", 00:06:49.582 "iscsi_delete_portal_group", 00:06:49.582 "iscsi_create_portal_group", 00:06:49.582 "iscsi_get_portal_groups", 00:06:49.582 "iscsi_delete_target_node", 00:06:49.582 "iscsi_target_node_remove_pg_ig_maps", 00:06:49.582 "iscsi_target_node_add_pg_ig_maps", 00:06:49.582 "iscsi_create_target_node", 00:06:49.582 "iscsi_get_target_nodes", 00:06:49.582 "iscsi_delete_initiator_group", 00:06:49.582 "iscsi_initiator_group_remove_initiators", 00:06:49.582 "iscsi_initiator_group_add_initiators", 00:06:49.582 "iscsi_create_initiator_group", 00:06:49.582 "iscsi_get_initiator_groups", 00:06:49.582 "nvmf_set_crdt", 00:06:49.582 "nvmf_set_config", 00:06:49.582 "nvmf_set_max_subsystems", 00:06:49.582 "nvmf_stop_mdns_prr", 00:06:49.582 "nvmf_publish_mdns_prr", 00:06:49.582 "nvmf_subsystem_get_listeners", 00:06:49.582 
"nvmf_subsystem_get_qpairs", 00:06:49.582 "nvmf_subsystem_get_controllers", 00:06:49.582 "nvmf_get_stats", 00:06:49.582 "nvmf_get_transports", 00:06:49.582 "nvmf_create_transport", 00:06:49.582 "nvmf_get_targets", 00:06:49.582 "nvmf_delete_target", 00:06:49.582 "nvmf_create_target", 00:06:49.582 "nvmf_subsystem_allow_any_host", 00:06:49.582 "nvmf_subsystem_set_keys", 00:06:49.582 "nvmf_subsystem_remove_host", 00:06:49.582 "nvmf_subsystem_add_host", 00:06:49.582 "nvmf_ns_remove_host", 00:06:49.582 "nvmf_ns_add_host", 00:06:49.582 "nvmf_subsystem_remove_ns", 00:06:49.582 "nvmf_subsystem_set_ns_ana_group", 00:06:49.582 "nvmf_subsystem_add_ns", 00:06:49.582 "nvmf_subsystem_listener_set_ana_state", 00:06:49.582 "nvmf_discovery_get_referrals", 00:06:49.582 "nvmf_discovery_remove_referral", 00:06:49.582 "nvmf_discovery_add_referral", 00:06:49.582 "nvmf_subsystem_remove_listener", 00:06:49.582 "nvmf_subsystem_add_listener", 00:06:49.583 "nvmf_delete_subsystem", 00:06:49.583 "nvmf_create_subsystem", 00:06:49.583 "nvmf_get_subsystems", 00:06:49.583 "env_dpdk_get_mem_stats", 00:06:49.583 "nbd_get_disks", 00:06:49.583 "nbd_stop_disk", 00:06:49.583 "nbd_start_disk", 00:06:49.583 "ublk_recover_disk", 00:06:49.583 "ublk_get_disks", 00:06:49.583 "ublk_stop_disk", 00:06:49.583 "ublk_start_disk", 00:06:49.583 "ublk_destroy_target", 00:06:49.583 "ublk_create_target", 00:06:49.583 "virtio_blk_create_transport", 00:06:49.583 "virtio_blk_get_transports", 00:06:49.583 "vhost_controller_set_coalescing", 00:06:49.583 "vhost_get_controllers", 00:06:49.583 "vhost_delete_controller", 00:06:49.583 "vhost_create_blk_controller", 00:06:49.583 "vhost_scsi_controller_remove_target", 00:06:49.583 "vhost_scsi_controller_add_target", 00:06:49.583 "vhost_start_scsi_controller", 00:06:49.583 "vhost_create_scsi_controller", 00:06:49.583 "thread_set_cpumask", 00:06:49.583 "scheduler_set_options", 00:06:49.583 "framework_get_governor", 00:06:49.583 "framework_get_scheduler", 00:06:49.583 "framework_set_scheduler", 00:06:49.583 "framework_get_reactors", 00:06:49.583 "thread_get_io_channels", 00:06:49.583 "thread_get_pollers", 00:06:49.583 "thread_get_stats", 00:06:49.583 "framework_monitor_context_switch", 00:06:49.583 "spdk_kill_instance", 00:06:49.583 "log_enable_timestamps", 00:06:49.583 "log_get_flags", 00:06:49.583 "log_clear_flag", 00:06:49.583 "log_set_flag", 00:06:49.583 "log_get_level", 00:06:49.583 "log_set_level", 00:06:49.583 "log_get_print_level", 00:06:49.583 "log_set_print_level", 00:06:49.583 "framework_enable_cpumask_locks", 00:06:49.583 "framework_disable_cpumask_locks", 00:06:49.583 "framework_wait_init", 00:06:49.583 "framework_start_init", 00:06:49.583 "scsi_get_devices", 00:06:49.583 "bdev_get_histogram", 00:06:49.583 "bdev_enable_histogram", 00:06:49.583 "bdev_set_qos_limit", 00:06:49.583 "bdev_set_qd_sampling_period", 00:06:49.583 "bdev_get_bdevs", 00:06:49.583 "bdev_reset_iostat", 00:06:49.583 "bdev_get_iostat", 00:06:49.583 "bdev_examine", 00:06:49.583 "bdev_wait_for_examine", 00:06:49.583 "bdev_set_options", 00:06:49.583 "accel_get_stats", 00:06:49.583 "accel_set_options", 00:06:49.583 "accel_set_driver", 00:06:49.583 "accel_crypto_key_destroy", 00:06:49.583 "accel_crypto_keys_get", 00:06:49.583 "accel_crypto_key_create", 00:06:49.583 "accel_assign_opc", 00:06:49.583 "accel_get_module_info", 00:06:49.583 "accel_get_opc_assignments", 00:06:49.583 "vmd_rescan", 00:06:49.583 "vmd_remove_device", 00:06:49.583 "vmd_enable", 00:06:49.583 "sock_get_default_impl", 00:06:49.583 "sock_set_default_impl", 
00:06:49.583 "sock_impl_set_options", 00:06:49.583 "sock_impl_get_options", 00:06:49.583 "iobuf_get_stats", 00:06:49.583 "iobuf_set_options", 00:06:49.583 "keyring_get_keys", 00:06:49.583 "vfu_tgt_set_base_path", 00:06:49.583 "framework_get_pci_devices", 00:06:49.583 "framework_get_config", 00:06:49.583 "framework_get_subsystems", 00:06:49.583 "fsdev_set_opts", 00:06:49.583 "fsdev_get_opts", 00:06:49.583 "trace_get_info", 00:06:49.583 "trace_get_tpoint_group_mask", 00:06:49.583 "trace_disable_tpoint_group", 00:06:49.583 "trace_enable_tpoint_group", 00:06:49.583 "trace_clear_tpoint_mask", 00:06:49.583 "trace_set_tpoint_mask", 00:06:49.583 "notify_get_notifications", 00:06:49.583 "notify_get_types", 00:06:49.583 "spdk_get_version", 00:06:49.583 "rpc_get_methods" 00:06:49.583 ] 00:06:49.583 08:21:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.583 08:21:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:49.583 08:21:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3523646 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3523646 ']' 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3523646 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3523646 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3523646' 00:06:49.583 killing process with pid 3523646 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3523646 00:06:49.583 08:21:41 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3523646 00:06:49.845 00:06:49.845 real 0m1.572s 00:06:49.845 user 0m2.807s 00:06:49.845 sys 0m0.470s 00:06:49.845 08:21:41 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.845 08:21:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.845 ************************************ 00:06:49.845 END TEST spdkcli_tcp 00:06:49.845 ************************************ 00:06:49.845 08:21:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.845 08:21:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.845 08:21:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.845 08:21:41 -- common/autotest_common.sh@10 -- # set +x 00:06:49.845 ************************************ 00:06:49.845 START TEST dpdk_mem_utility 00:06:49.845 ************************************ 00:06:49.845 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:50.107 * Looking for test storage... 
00:06:50.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.107 08:21:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.107 --rc genhtml_branch_coverage=1 00:06:50.107 --rc genhtml_function_coverage=1 00:06:50.107 --rc genhtml_legend=1 00:06:50.107 --rc geninfo_all_blocks=1 00:06:50.107 --rc geninfo_unexecuted_blocks=1 00:06:50.107 00:06:50.107 ' 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.107 --rc 
genhtml_branch_coverage=1 00:06:50.107 --rc genhtml_function_coverage=1 00:06:50.107 --rc genhtml_legend=1 00:06:50.107 --rc geninfo_all_blocks=1 00:06:50.107 --rc geninfo_unexecuted_blocks=1 00:06:50.107 00:06:50.107 ' 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.107 --rc genhtml_branch_coverage=1 00:06:50.107 --rc genhtml_function_coverage=1 00:06:50.107 --rc genhtml_legend=1 00:06:50.107 --rc geninfo_all_blocks=1 00:06:50.107 --rc geninfo_unexecuted_blocks=1 00:06:50.107 00:06:50.107 ' 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:50.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.107 --rc genhtml_branch_coverage=1 00:06:50.107 --rc genhtml_function_coverage=1 00:06:50.107 --rc genhtml_legend=1 00:06:50.107 --rc geninfo_all_blocks=1 00:06:50.107 --rc geninfo_unexecuted_blocks=1 00:06:50.107 00:06:50.107 ' 00:06:50.107 08:21:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:50.107 08:21:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3524062 00:06:50.107 08:21:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3524062 00:06:50.107 08:21:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3524062 ']' 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.107 08:21:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.107 [2024-10-01 08:21:41.892418] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:06:50.107 [2024-10-01 08:21:41.892478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524062 ] 00:06:50.369 [2024-10-01 08:21:41.951175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.369 [2024-10-01 08:21:42.016049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.941 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.941 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:50.941 08:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:50.941 08:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:50.941 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.941 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.941 { 00:06:50.941 "filename": "/tmp/spdk_mem_dump.txt" 00:06:50.941 } 00:06:50.941 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.941 08:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:50.941 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:50.941 1 heaps totaling size 860.000000 MiB 00:06:50.941 size: 860.000000 MiB heap id: 0 00:06:50.941 end heaps---------- 00:06:50.941 9 mempools totaling size 642.649841 MiB 00:06:50.941 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:50.941 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:50.941 size: 92.545471 MiB name: bdev_io_3524062 00:06:50.941 size: 51.011292 MiB name: evtpool_3524062 00:06:50.941 size: 50.003479 MiB name: msgpool_3524062 00:06:50.941 size: 36.509338 MiB name: fsdev_io_3524062 00:06:50.941 size: 21.763794 MiB name: PDU_Pool 00:06:50.941 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:50.941 size: 0.026123 MiB name: Session_Pool 00:06:50.941 end mempools------- 00:06:50.941 6 memzones totaling size 4.142822 MiB 00:06:50.941 size: 1.000366 MiB name: RG_ring_0_3524062 00:06:50.941 size: 1.000366 MiB name: RG_ring_1_3524062 00:06:50.941 size: 1.000366 MiB name: RG_ring_4_3524062 00:06:50.941 size: 1.000366 MiB name: RG_ring_5_3524062 00:06:50.941 size: 0.125366 MiB name: RG_ring_2_3524062 00:06:50.941 size: 0.015991 MiB name: RG_ring_3_3524062 00:06:50.941 end memzones------- 00:06:50.941 08:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:51.202 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:06:51.202 list of free elements. 
size: 13.984680 MiB 00:06:51.202 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:51.202 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:51.202 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:51.202 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:51.202 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:51.202 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:51.202 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:51.202 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:51.202 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:51.202 element at address: 0x20001d800000 with size: 0.582886 MiB 00:06:51.202 element at address: 0x200003e00000 with size: 0.495422 MiB 00:06:51.202 element at address: 0x20000d800000 with size: 0.490723 MiB 00:06:51.202 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:51.202 element at address: 0x200007000000 with size: 0.481934 MiB 00:06:51.202 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:06:51.202 element at address: 0x200003a00000 with size: 0.355042 MiB 00:06:51.202 list of standard malloc elements. size: 199.218628 MiB 00:06:51.202 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:51.202 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:51.202 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:51.202 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:51.202 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:51.202 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:51.202 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:51.202 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:51.202 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:51.202 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:51.202 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:51.202 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:51.202 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:51.202 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:51.202 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:51.202 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003aff940 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003eff000 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:51.203 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:51.203 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:06:51.203 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:51.203 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:51.203 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:51.203 list of memzone associated elements. size: 646.796692 MiB 00:06:51.203 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:51.203 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:51.203 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:51.203 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:51.203 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:51.203 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3524062_0 00:06:51.203 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:51.203 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3524062_0 00:06:51.203 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:51.203 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3524062_0 00:06:51.203 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:51.203 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3524062_0 00:06:51.203 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:51.203 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:51.203 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:51.203 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:51.203 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:51.203 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3524062 00:06:51.203 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:51.203 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3524062 00:06:51.203 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:51.203 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3524062 00:06:51.203 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:51.203 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:51.203 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:51.203 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:51.203 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:51.203 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:51.203 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:51.203 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:51.203 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:51.203 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3524062 00:06:51.203 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:51.203 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_3524062 00:06:51.203 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:51.203 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3524062 00:06:51.203 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:51.203 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3524062 00:06:51.203 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:06:51.203 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3524062 00:06:51.203 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:06:51.203 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3524062 00:06:51.203 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:51.203 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:51.203 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:51.203 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:51.203 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:51.203 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:51.203 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:06:51.203 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3524062 00:06:51.203 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:51.203 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:51.203 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:06:51.203 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:51.203 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:06:51.203 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3524062 00:06:51.203 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:06:51.203 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:51.203 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:51.203 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3524062 00:06:51.203 element at address: 0x200003affa00 with size: 0.000305 MiB 00:06:51.203 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3524062 00:06:51.203 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:06:51.203 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3524062 00:06:51.203 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:06:51.203 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:51.203 08:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:51.203 08:21:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3524062 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3524062 ']' 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3524062 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3524062 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3524062' 
00:06:51.203 killing process with pid 3524062 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3524062 00:06:51.203 08:21:42 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3524062 00:06:51.464 00:06:51.464 real 0m1.432s 00:06:51.464 user 0m1.515s 00:06:51.464 sys 0m0.406s 00:06:51.464 08:21:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.464 08:21:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.464 ************************************ 00:06:51.464 END TEST dpdk_mem_utility 00:06:51.464 ************************************ 00:06:51.464 08:21:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:51.464 08:21:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.464 08:21:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.464 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.464 ************************************ 00:06:51.464 START TEST event 00:06:51.464 ************************************ 00:06:51.464 08:21:43 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:51.464 * Looking for test storage... 00:06:51.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:51.464 08:21:43 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.464 08:21:43 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.464 08:21:43 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.725 08:21:43 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.725 08:21:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.725 08:21:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.725 08:21:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.725 08:21:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.725 08:21:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.725 08:21:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.725 08:21:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.725 08:21:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.725 08:21:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.725 08:21:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.725 08:21:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.725 08:21:43 event -- scripts/common.sh@344 -- # case "$op" in 00:06:51.725 08:21:43 event -- scripts/common.sh@345 -- # : 1 00:06:51.725 08:21:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.725 08:21:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.725 08:21:43 event -- scripts/common.sh@365 -- # decimal 1 00:06:51.725 08:21:43 event -- scripts/common.sh@353 -- # local d=1 00:06:51.725 08:21:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.725 08:21:43 event -- scripts/common.sh@355 -- # echo 1 00:06:51.725 08:21:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.725 08:21:43 event -- scripts/common.sh@366 -- # decimal 2 00:06:51.725 08:21:43 event -- scripts/common.sh@353 -- # local d=2 00:06:51.725 08:21:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.725 08:21:43 event -- scripts/common.sh@355 -- # echo 2 00:06:51.725 08:21:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.725 08:21:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.725 08:21:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.725 08:21:43 event -- scripts/common.sh@368 -- # return 0 00:06:51.725 08:21:43 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.725 08:21:43 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.725 --rc genhtml_branch_coverage=1 00:06:51.725 --rc genhtml_function_coverage=1 00:06:51.725 --rc genhtml_legend=1 00:06:51.725 --rc geninfo_all_blocks=1 00:06:51.725 --rc geninfo_unexecuted_blocks=1 00:06:51.725 00:06:51.725 ' 00:06:51.725 08:21:43 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.725 --rc genhtml_branch_coverage=1 00:06:51.725 --rc genhtml_function_coverage=1 00:06:51.725 --rc genhtml_legend=1 00:06:51.725 --rc geninfo_all_blocks=1 00:06:51.725 --rc geninfo_unexecuted_blocks=1 00:06:51.725 00:06:51.725 ' 00:06:51.725 08:21:43 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.725 --rc genhtml_branch_coverage=1 00:06:51.725 --rc genhtml_function_coverage=1 00:06:51.725 --rc genhtml_legend=1 00:06:51.725 --rc geninfo_all_blocks=1 00:06:51.725 --rc geninfo_unexecuted_blocks=1 00:06:51.725 00:06:51.725 ' 00:06:51.725 08:21:43 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.725 --rc genhtml_branch_coverage=1 00:06:51.725 --rc genhtml_function_coverage=1 00:06:51.725 --rc genhtml_legend=1 00:06:51.725 --rc geninfo_all_blocks=1 00:06:51.725 --rc geninfo_unexecuted_blocks=1 00:06:51.725 00:06:51.725 ' 00:06:51.725 08:21:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:51.726 08:21:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:51.726 08:21:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.726 08:21:43 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:51.726 08:21:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.726 08:21:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.726 ************************************ 00:06:51.726 START TEST event_perf 00:06:51.726 ************************************ 00:06:51.726 08:21:43 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:51.726 Running I/O for 1 seconds...[2024-10-01 08:21:43.388094] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:06:51.726 [2024-10-01 08:21:43.388190] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524439 ] 00:06:51.726 [2024-10-01 08:21:43.454170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.726 [2024-10-01 08:21:43.526169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.726 [2024-10-01 08:21:43.526283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.726 [2024-10-01 08:21:43.526439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.726 Running I/O for 1 seconds...[2024-10-01 08:21:43.526440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.107 00:06:53.107 lcore 0: 187401 00:06:53.107 lcore 1: 187403 00:06:53.107 lcore 2: 187400 00:06:53.107 lcore 3: 187403 00:06:53.107 done. 00:06:53.107 00:06:53.107 real 0m1.214s 00:06:53.107 user 0m4.133s 00:06:53.107 sys 0m0.079s 00:06:53.107 08:21:44 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.107 08:21:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.107 ************************************ 00:06:53.107 END TEST event_perf 00:06:53.107 ************************************ 00:06:53.107 08:21:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:53.107 08:21:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:53.107 08:21:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.107 08:21:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.107 ************************************ 00:06:53.107 START TEST event_reactor 00:06:53.107 ************************************ 00:06:53.107 08:21:44 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:53.107 [2024-10-01 08:21:44.672049] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:06:53.107 [2024-10-01 08:21:44.672153] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524604 ] 00:06:53.107 [2024-10-01 08:21:44.736179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.107 [2024-10-01 08:21:44.804508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.049 test_start 00:06:54.049 oneshot 00:06:54.049 tick 100 00:06:54.049 tick 100 00:06:54.049 tick 250 00:06:54.049 tick 100 00:06:54.049 tick 100 00:06:54.049 tick 100 00:06:54.049 tick 250 00:06:54.049 tick 500 00:06:54.049 tick 100 00:06:54.049 tick 100 00:06:54.049 tick 250 00:06:54.049 tick 100 00:06:54.049 tick 100 00:06:54.049 test_end 00:06:54.049 00:06:54.049 real 0m1.206s 00:06:54.049 user 0m1.133s 00:06:54.049 sys 0m0.069s 00:06:54.049 08:21:45 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.049 08:21:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:54.049 ************************************ 00:06:54.049 END TEST event_reactor 00:06:54.049 ************************************ 00:06:54.309 08:21:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.309 08:21:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:54.309 08:21:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.309 08:21:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.309 ************************************ 00:06:54.309 START TEST event_reactor_perf 00:06:54.309 ************************************ 00:06:54.309 08:21:45 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.309 [2024-10-01 08:21:45.955335] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:06:54.309 [2024-10-01 08:21:45.955431] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524898 ] 00:06:54.309 [2024-10-01 08:21:46.019392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.309 [2024-10-01 08:21:46.084663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.693 test_start 00:06:55.693 test_end 00:06:55.693 Performance: 368276 events per second 00:06:55.693 00:06:55.693 real 0m1.206s 00:06:55.693 user 0m1.133s 00:06:55.693 sys 0m0.069s 00:06:55.693 08:21:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.693 08:21:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.693 ************************************ 00:06:55.693 END TEST event_reactor_perf 00:06:55.693 ************************************ 00:06:55.693 08:21:47 event -- event/event.sh@49 -- # uname -s 00:06:55.693 08:21:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:55.693 08:21:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:55.693 08:21:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.693 08:21:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.693 08:21:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.693 ************************************ 00:06:55.693 START TEST event_scheduler 00:06:55.693 ************************************ 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:55.693 * Looking for test storage... 
00:06:55.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.693 08:21:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.693 --rc genhtml_branch_coverage=1 00:06:55.693 --rc genhtml_function_coverage=1 00:06:55.693 --rc genhtml_legend=1 00:06:55.693 --rc geninfo_all_blocks=1 00:06:55.693 --rc geninfo_unexecuted_blocks=1 00:06:55.693 00:06:55.693 ' 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.693 --rc genhtml_branch_coverage=1 00:06:55.693 --rc genhtml_function_coverage=1 00:06:55.693 --rc genhtml_legend=1 00:06:55.693 --rc geninfo_all_blocks=1 00:06:55.693 --rc geninfo_unexecuted_blocks=1 00:06:55.693 00:06:55.693 ' 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.693 --rc genhtml_branch_coverage=1 00:06:55.693 --rc genhtml_function_coverage=1 00:06:55.693 --rc genhtml_legend=1 00:06:55.693 --rc geninfo_all_blocks=1 00:06:55.693 --rc geninfo_unexecuted_blocks=1 00:06:55.693 00:06:55.693 ' 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.693 --rc genhtml_branch_coverage=1 00:06:55.693 --rc genhtml_function_coverage=1 00:06:55.693 --rc genhtml_legend=1 00:06:55.693 --rc geninfo_all_blocks=1 00:06:55.693 --rc geninfo_unexecuted_blocks=1 00:06:55.693 00:06:55.693 ' 00:06:55.693 08:21:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:55.693 08:21:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3525286 00:06:55.693 08:21:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.693 08:21:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:55.693 08:21:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3525286 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3525286 ']' 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.693 08:21:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.693 [2024-10-01 08:21:47.469212] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:06:55.693 [2024-10-01 08:21:47.469266] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525286 ] 00:06:55.954 [2024-10-01 08:21:47.522369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.954 [2024-10-01 08:21:47.576103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.954 [2024-10-01 08:21:47.576157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.954 [2024-10-01 08:21:47.576316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.954 [2024-10-01 08:21:47.576317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:56.526 08:21:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.526 [2024-10-01 08:21:48.274599] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:56.526 [2024-10-01 08:21:48.274615] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:56.526 [2024-10-01 08:21:48.274623] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:56.526 [2024-10-01 08:21:48.274628] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:56.526 [2024-10-01 08:21:48.274632] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.526 08:21:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.526 [2024-10-01 08:21:48.330409] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.526 08:21:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.526 08:21:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 ************************************ 00:06:56.787 START TEST scheduler_create_thread 00:06:56.787 ************************************ 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 2 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 3 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 4 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 5 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 6 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 7 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.787 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 8 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.788 9 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.788 08:21:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.173 10 00:06:58.173 08:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.173 08:21:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:58.173 08:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.173 08:21:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.744 08:21:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.744 08:21:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:58.744 08:21:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:58.744 08:21:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.744 08:21:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.686 08:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.686 08:21:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:59.686 08:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.686 08:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.257 08:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.257 08:21:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:00.257 08:21:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:00.257 08:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.257 08:21:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.829 08:21:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.829 00:07:00.829 real 0m4.216s 00:07:00.829 user 0m0.023s 00:07:00.829 sys 0m0.008s 00:07:00.829 08:21:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.829 08:21:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.829 ************************************ 00:07:00.829 END TEST scheduler_create_thread 00:07:00.829 ************************************ 00:07:00.829 08:21:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:00.829 08:21:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3525286 00:07:00.829 08:21:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3525286 ']' 00:07:00.829 08:21:52 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3525286 00:07:00.829 08:21:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:00.829 08:21:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.829 08:21:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3525286 00:07:01.090 08:21:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:01.091 08:21:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:01.091 08:21:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3525286' 00:07:01.091 killing process with pid 3525286 00:07:01.091 08:21:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3525286 00:07:01.091 08:21:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3525286 00:07:01.091 [2024-10-01 08:21:52.866029] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:01.352 00:07:01.352 real 0m5.833s 00:07:01.352 user 0m13.496s 00:07:01.352 sys 0m0.400s 00:07:01.352 08:21:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.352 08:21:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:01.352 ************************************ 00:07:01.352 END TEST event_scheduler 00:07:01.352 ************************************ 00:07:01.352 08:21:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:01.352 08:21:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:01.352 08:21:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.352 08:21:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.352 08:21:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.352 ************************************ 00:07:01.352 START TEST app_repeat 00:07:01.352 ************************************ 00:07:01.352 08:21:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3526477 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3526477' 00:07:01.352 Process app_repeat pid: 3526477 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:01.352 spdk_app_start Round 0 00:07:01.352 08:21:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3526477 /var/tmp/spdk-nbd.sock 00:07:01.352 08:21:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3526477 ']' 00:07:01.352 08:21:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.352 08:21:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.352 08:21:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.352 08:21:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.352 08:21:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.352 [2024-10-01 08:21:53.168372] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
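The waitforlisten call above blocks until the freshly forked app_repeat process (pid 3526477) is actually serving RPCs on /var/tmp/spdk-nbd.sock. A minimal sketch of that polling pattern, assuming rpc.py's rpc_get_methods as the liveness probe and a retry bound matching the max_retries=100 seen in the trace (the real helper in autotest_common.sh is more elaborate):

  wait_for_rpc_socket() {
      local pid=$1 sock=$2 i
      for ((i = 0; i < 100; i++)); do
          # Stop early if the server process has already died.
          kill -0 "$pid" 2>/dev/null || return 1
          # Any successful RPC means the socket is up and serving.
          scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }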
00:07:01.352 [2024-10-01 08:21:53.168430] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526477 ] 00:07:01.613 [2024-10-01 08:21:53.229060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.613 [2024-10-01 08:21:53.294730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.613 [2024-10-01 08:21:53.294733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.185 08:21:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.185 08:21:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:02.185 08:21:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.447 Malloc0 00:07:02.447 08:21:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.708 Malloc1 00:07:02.708 08:21:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.708 08:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.969 /dev/nbd0 00:07:02.969 08:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.969 08:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.969 1+0 records in 00:07:02.969 1+0 records out 00:07:02.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283374 s, 14.5 MB/s 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:02.969 08:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.969 08:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.969 08:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.969 /dev/nbd1 00:07:02.969 08:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.969 08:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.969 08:21:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.229 1+0 records in 00:07:03.230 1+0 records out 00:07:03.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029013 s, 14.1 MB/s 00:07:03.230 08:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.230 08:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:03.230 08:21:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.230 08:21:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.230 08:21:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:03.230 08:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.230 08:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.230 08:21:54 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.230 08:21:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.230 08:21:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.230 08:21:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.230 { 00:07:03.230 "nbd_device": "/dev/nbd0", 00:07:03.230 "bdev_name": "Malloc0" 00:07:03.230 }, 00:07:03.230 { 00:07:03.230 "nbd_device": "/dev/nbd1", 00:07:03.230 "bdev_name": "Malloc1" 00:07:03.230 } 00:07:03.230 ]' 00:07:03.230 08:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.230 { 00:07:03.230 "nbd_device": "/dev/nbd0", 00:07:03.230 "bdev_name": "Malloc0" 00:07:03.230 }, 00:07:03.230 { 00:07:03.230 "nbd_device": "/dev/nbd1", 00:07:03.230 "bdev_name": "Malloc1" 00:07:03.230 } 00:07:03.230 ]' 00:07:03.230 08:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.230 /dev/nbd1' 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.230 /dev/nbd1' 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:03.230 256+0 records in 00:07:03.230 256+0 records out 00:07:03.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127515 s, 82.2 MB/s 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.230 08:21:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.490 256+0 records in 00:07:03.490 256+0 records out 00:07:03.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162426 s, 64.6 MB/s 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.490 256+0 records in 00:07:03.490 256+0 records out 00:07:03.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176621 s, 59.4 MB/s 00:07:03.490 08:21:55 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.490 08:21:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.491 08:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.751 08:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.011 08:21:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.011 08:21:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:04.272 08:21:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.272 [2024-10-01 08:21:56.008831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.272 [2024-10-01 08:21:56.071535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.272 [2024-10-01 08:21:56.071538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.533 [2024-10-01 08:21:56.103133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.533 [2024-10-01 08:21:56.103167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:07.074 08:21:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:07.074 08:21:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:07.074 spdk_app_start Round 1 00:07:07.074 08:21:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3526477 /var/tmp/spdk-nbd.sock 00:07:07.074 08:21:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3526477 ']' 00:07:07.074 08:21:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.074 08:21:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.074 08:21:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
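Round 0 above exercises the full nbd data path. nbd_dd_data_verify fills a scratch file from /dev/urandom, copies it onto each exported /dev/nbdX with O_DIRECT, re-reads both devices with cmp, and only then deletes the file; any mismatch makes cmp exit non-zero and fails the test. Reduced to its essentials (the trace uses spdk/test/event/nbdrandtest as the scratch file; the path is shortened here):

  tmp=nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through each export
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                              # read back and verify byte-for-byte
  done
  rm "$tmp"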
00:07:07.074 08:21:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.074 08:21:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.333 08:21:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.333 08:21:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:07.333 08:21:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.592 Malloc0 00:07:07.592 08:21:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.592 Malloc1 00:07:07.592 08:21:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.592 08:21:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.851 /dev/nbd0 00:07:07.851 08:21:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.851 08:21:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:07.851 1+0 records in 00:07:07.851 1+0 records out 00:07:07.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282779 s, 14.5 MB/s 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.851 08:21:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:07.851 08:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.851 08:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.851 08:21:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:08.111 /dev/nbd1 00:07:08.111 08:21:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.111 08:21:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.111 1+0 records in 00:07:08.111 1+0 records out 00:07:08.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287469 s, 14.2 MB/s 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.111 08:21:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:08.111 08:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.111 08:21:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.111 08:21:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.111 08:21:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.111 08:21:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:08.372 { 00:07:08.372 "nbd_device": "/dev/nbd0", 00:07:08.372 "bdev_name": "Malloc0" 00:07:08.372 }, 00:07:08.372 { 00:07:08.372 "nbd_device": "/dev/nbd1", 00:07:08.372 "bdev_name": "Malloc1" 00:07:08.372 } 00:07:08.372 ]' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.372 { 00:07:08.372 "nbd_device": "/dev/nbd0", 00:07:08.372 "bdev_name": "Malloc0" 00:07:08.372 }, 00:07:08.372 { 00:07:08.372 "nbd_device": "/dev/nbd1", 00:07:08.372 "bdev_name": "Malloc1" 00:07:08.372 } 00:07:08.372 ]' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.372 /dev/nbd1' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.372 /dev/nbd1' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.372 256+0 records in 00:07:08.372 256+0 records out 00:07:08.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118709 s, 88.3 MB/s 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.372 256+0 records in 00:07:08.372 256+0 records out 00:07:08.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168612 s, 62.2 MB/s 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.372 256+0 records in 00:07:08.372 256+0 records out 00:07:08.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189428 s, 55.4 MB/s 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.372 08:22:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.632 08:22:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.632 08:22:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.632 08:22:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.632 08:22:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.632 08:22:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.632 08:22:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.632 08:22:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.633 08:22:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.633 08:22:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.633 08:22:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.893 08:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.152 08:22:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.152 08:22:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:09.152 08:22:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:09.412 [2024-10-01 08:22:01.070748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.412 [2024-10-01 08:22:01.133380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.412 [2024-10-01 08:22:01.133383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.412 [2024-10-01 08:22:01.165919] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:09.412 [2024-10-01 08:22:01.165956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.712 08:22:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:12.712 08:22:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:12.712 spdk_app_start Round 2 00:07:12.712 08:22:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3526477 /var/tmp/spdk-nbd.sock 00:07:12.712 08:22:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3526477 ']' 00:07:12.712 08:22:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.712 08:22:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.712 08:22:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
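Teardown is the mirror image of setup: nbd_stop_disk asks the app to drop an export, and waitfornbd_exit then polls /proc/partitions until the kernel has actually released the device, which is what the repeated 'grep -q -w nbdX /proc/partitions' loops above are doing. A sketch of that drain loop, with the retry bound taken from the 'i <= 20' counter in the trace and the sleep interval assumed:

  wait_nbd_gone() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # Once the /proc/partitions entry disappears, the device is released.
          grep -q -w "$name" /proc/partitions || return 0
          sleep 0.1
      done
      return 1
  }

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  wait_nbd_gone nbd0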
00:07:12.712 08:22:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.712 08:22:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.712 08:22:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.712 08:22:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:12.712 08:22:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.712 Malloc0 00:07:12.712 08:22:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.712 Malloc1 00:07:12.712 08:22:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.712 08:22:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.713 08:22:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.713 08:22:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.713 08:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.713 08:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.713 08:22:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.973 /dev/nbd0 00:07:12.973 08:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.973 08:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:12.973 1+0 records in 00:07:12.973 1+0 records out 00:07:12.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277387 s, 14.8 MB/s 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:12.973 08:22:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:12.973 08:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.973 08:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.973 08:22:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.234 /dev/nbd1 00:07:13.234 08:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.234 08:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.234 1+0 records in 00:07:13.234 1+0 records out 00:07:13.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263061 s, 15.6 MB/s 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.234 08:22:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:13.234 08:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.234 08:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.234 08:22:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.234 08:22:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.234 08:22:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:13.495 { 00:07:13.495 "nbd_device": "/dev/nbd0", 00:07:13.495 "bdev_name": "Malloc0" 00:07:13.495 }, 00:07:13.495 { 00:07:13.495 "nbd_device": "/dev/nbd1", 00:07:13.495 "bdev_name": "Malloc1" 00:07:13.495 } 00:07:13.495 ]' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.495 { 00:07:13.495 "nbd_device": "/dev/nbd0", 00:07:13.495 "bdev_name": "Malloc0" 00:07:13.495 }, 00:07:13.495 { 00:07:13.495 "nbd_device": "/dev/nbd1", 00:07:13.495 "bdev_name": "Malloc1" 00:07:13.495 } 00:07:13.495 ]' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.495 /dev/nbd1' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.495 /dev/nbd1' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:13.495 256+0 records in 00:07:13.495 256+0 records out 00:07:13.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127322 s, 82.4 MB/s 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.495 256+0 records in 00:07:13.495 256+0 records out 00:07:13.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166185 s, 63.1 MB/s 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.495 256+0 records in 00:07:13.495 256+0 records out 00:07:13.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180604 s, 58.1 MB/s 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.495 08:22:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.756 08:22:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.017 08:22:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.017 08:22:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.277 08:22:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.277 [2024-10-01 08:22:06.096903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.538 [2024-10-01 08:22:06.160470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.538 [2024-10-01 08:22:06.160473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.538 [2024-10-01 08:22:06.192690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.538 [2024-10-01 08:22:06.192722] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.890 08:22:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3526477 /var/tmp/spdk-nbd.sock 00:07:17.890 08:22:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3526477 ']' 00:07:17.890 08:22:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.890 08:22:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.890 08:22:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
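After both exports are stopped, nbd_get_count re-queries the app and insists the export list is empty before the round may finish. The jq/grep pipeline above boils down to: dump nbd_get_disks, extract every .nbd_device, and count lines matching /dev/nbd; the bare 'true' in the trace exists because grep -c exits non-zero on zero matches, which would otherwise trip errexit. Roughly, with the socket path as in the rounds above:

  disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c prints 0 but exits 1 when nothing matches
  [ "$count" -ne 0 ] && exit 1                        # any leftover export fails the round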
00:07:17.890 08:22:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.890 08:22:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:17.890 08:22:09 event.app_repeat -- event/event.sh@39 -- # killprocess 3526477 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3526477 ']' 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3526477 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3526477 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3526477' 00:07:17.890 killing process with pid 3526477 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3526477 00:07:17.890 08:22:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3526477 00:07:17.890 spdk_app_start is called in Round 0. 00:07:17.890 Shutdown signal received, stop current app iteration 00:07:17.890 Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 reinitialization... 00:07:17.890 spdk_app_start is called in Round 1. 00:07:17.890 Shutdown signal received, stop current app iteration 00:07:17.890 Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 reinitialization... 00:07:17.890 spdk_app_start is called in Round 2. 00:07:17.890 Shutdown signal received, stop current app iteration 00:07:17.890 Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 reinitialization... 00:07:17.890 spdk_app_start is called in Round 3. 
00:07:17.890 Shutdown signal received, stop current app iteration 00:07:17.890 08:22:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:17.890 08:22:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:17.890 00:07:17.890 real 0m16.190s 00:07:17.890 user 0m35.078s 00:07:17.891 sys 0m2.297s 00:07:17.891 08:22:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.891 08:22:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 ************************************ 00:07:17.891 END TEST app_repeat 00:07:17.891 ************************************ 00:07:17.891 08:22:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:17.891 08:22:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:17.891 08:22:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.891 08:22:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.891 08:22:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 ************************************ 00:07:17.891 START TEST cpu_locks 00:07:17.891 ************************************ 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:17.891 * Looking for test storage... 00:07:17.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.891 08:22:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:17.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.891 --rc genhtml_branch_coverage=1 00:07:17.891 --rc genhtml_function_coverage=1 00:07:17.891 --rc genhtml_legend=1 00:07:17.891 --rc geninfo_all_blocks=1 00:07:17.891 --rc geninfo_unexecuted_blocks=1 00:07:17.891 00:07:17.891 ' 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:17.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.891 --rc genhtml_branch_coverage=1 00:07:17.891 --rc genhtml_function_coverage=1 00:07:17.891 --rc genhtml_legend=1 00:07:17.891 --rc geninfo_all_blocks=1 00:07:17.891 --rc geninfo_unexecuted_blocks=1 00:07:17.891 00:07:17.891 ' 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:17.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.891 --rc genhtml_branch_coverage=1 00:07:17.891 --rc genhtml_function_coverage=1 00:07:17.891 --rc genhtml_legend=1 00:07:17.891 --rc geninfo_all_blocks=1 00:07:17.891 --rc geninfo_unexecuted_blocks=1 00:07:17.891 00:07:17.891 ' 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:17.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.891 --rc genhtml_branch_coverage=1 00:07:17.891 --rc genhtml_function_coverage=1 00:07:17.891 --rc genhtml_legend=1 00:07:17.891 --rc geninfo_all_blocks=1 00:07:17.891 --rc geninfo_unexecuted_blocks=1 00:07:17.891 00:07:17.891 ' 00:07:17.891 08:22:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:17.891 08:22:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:17.891 08:22:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:17.891 08:22:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.891 08:22:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 ************************************ 
00:07:17.891 START TEST default_locks 00:07:17.891 ************************************ 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3529946 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3529946 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3529946 ']' 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 08:22:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.891 [2024-10-01 08:22:09.690270] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:17.891 [2024-10-01 08:22:09.690328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529946 ] 00:07:18.164 [2024-10-01 08:22:09.751387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.164 [2024-10-01 08:22:09.819165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.825 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.825 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:18.825 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3529946 00:07:18.825 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3529946 00:07:18.825 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.086 lslocks: write error 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3529946 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3529946 ']' 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3529946 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3529946 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 3529946' 00:07:19.086 killing process with pid 3529946 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3529946 00:07:19.086 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3529946 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3529946 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3529946 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3529946 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3529946 ']' 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
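locks_exist, seen above, is how cpu_locks verifies that a running target really holds its core lock: lslocks -p lists the locks the process owns and grep -q looks for the spdk_cpu_lock prefix. The stray "lslocks: write error" is lslocks complaining about the pipe that grep -q closed early, not a test failure. A sketch under those assumptions:

    locks_exist() {                  # sketch of the check traced above
        local pid=$1
        # true iff the process holds a lock on a /var/tmp/spdk_cpu_lock_* file;
        # grep -q exits at the first hit, which is what makes lslocks print "write error"
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }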
00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3529946) - No such process 00:07:19.346 ERROR: process (pid: 3529946) is no longer running 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:19.346 00:07:19.346 real 0m1.378s 00:07:19.346 user 0m1.478s 00:07:19.346 sys 0m0.465s 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.346 08:22:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.346 ************************************ 00:07:19.346 END TEST default_locks 00:07:19.346 ************************************ 00:07:19.346 08:22:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:19.346 08:22:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.346 08:22:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.346 08:22:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.346 ************************************ 00:07:19.346 START TEST default_locks_via_rpc 00:07:19.346 ************************************ 00:07:19.346 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:19.346 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3530322 00:07:19.346 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3530322 00:07:19.347 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.347 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3530322 ']' 00:07:19.347 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.347 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.347 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
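The tail of default_locks above is a deliberate failure path: once the target is dead, waitforlisten must fail, and the NOT wrapper inverts that outcome. The trace shows the pieces: es=1 captured, (( es > 128 )) ruling out death by signal, and (( !es == 0 )) accepting the plain failure. A reduced sketch of the wrapper, eliding the signal-name handling behind the [[ -n '' ]] check:

    NOT() {                              # reduced sketch of the expected-failure wrapper traced above
        local es=0
        "$@" || es=$?                    # run the command, capture its exit status
        if (( es > 128 )); then          # >128 means killed by a signal, not a clean failure
            return "$es"
        fi
        (( !es == 0 )) && return 0       # nonzero exit is the failure we wanted: NOT succeeds
        return 1                         # the command unexpectedly succeeded
    }
    # usage, as in the trace above: NOT waitforlisten 3529946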
00:07:19.347 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.347 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.347 [2024-10-01 08:22:11.131404] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:19.347 [2024-10-01 08:22:11.131476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530322 ] 00:07:19.607 [2024-10-01 08:22:11.195063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.607 [2024-10-01 08:22:11.269220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.176 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.176 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:20.176 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:20.176 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.176 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.176 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.176 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:20.176 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3530322 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3530322 00:07:20.177 08:22:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.746 08:22:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3530322 00:07:20.746 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3530322 ']' 00:07:20.746 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3530322 00:07:20.746 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:20.746 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.746 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3530322 00:07:21.006 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.006 
08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.006 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3530322' 00:07:21.006 killing process with pid 3530322 00:07:21.006 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3530322 00:07:21.006 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3530322 00:07:21.006 00:07:21.006 real 0m1.736s 00:07:21.006 user 0m1.864s 00:07:21.006 sys 0m0.579s 00:07:21.006 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.006 08:22:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.006 ************************************ 00:07:21.006 END TEST default_locks_via_rpc 00:07:21.006 ************************************ 00:07:21.266 08:22:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:21.266 08:22:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.266 08:22:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.266 08:22:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.266 ************************************ 00:07:21.266 START TEST non_locking_app_on_locked_coremask 00:07:21.266 ************************************ 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3530686 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3530686 /var/tmp/spdk.sock 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3530686 ']' 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.266 08:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.266 [2024-10-01 08:22:12.950094] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
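default_locks_via_rpc, finished just above, never restarts the target; it toggles the lock state over the RPC socket instead: framework_disable_cpumask_locks releases the per-core lock files (so the no_locks glob comes back empty), framework_enable_cpumask_locks reclaims them, and lslocks confirms the lock is back. The same round-trip against a target on the default /var/tmp/spdk.sock, sketched with rpc.py directly in place of the rpc_cmd wrapper:

    # toggle CPU core locks at runtime instead of restarting spdk_tgt (sketch)
    scripts/rpc.py framework_disable_cpumask_locks       # target drops /var/tmp/spdk_cpu_lock_*
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo 'unexpected leftover lock files'
    scripts/rpc.py framework_enable_cpumask_locks        # target re-acquires locks for its core mask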
00:07:21.266 [2024-10-01 08:22:12.950150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530686 ] 00:07:21.266 [2024-10-01 08:22:13.013740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.266 [2024-10-01 08:22:13.082932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.206 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.206 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:22.206 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:22.206 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3531016 00:07:22.206 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3531016 /var/tmp/spdk2.sock 00:07:22.206 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3531016 ']' 00:07:22.206 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.207 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.207 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.207 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.207 08:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.207 [2024-10-01 08:22:13.756827] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:22.207 [2024-10-01 08:22:13.756881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531016 ] 00:07:22.207 [2024-10-01 08:22:13.842523] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
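non_locking_app_on_locked_coremask runs two targets at once: the first spdk_tgt claims core 0 and its lock, and the second is pointed at a separate RPC socket with -r and started with --disable-cpumask-locks, which is why the trace prints "CPU core locks deactivated" instead of a lock conflict. The shape of that setup, assuming it is run from the spdk tree with paths as in the log:

    # primary: claims core 0 and its lock file (sketch of the pattern traced above)
    build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    # secondary: same core, separate RPC socket, opts out of lock claiming
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!

The mirror case follows below as locking_app_on_unlocked_coremask: there the primary is the one started with --disable-cpumask-locks, leaving the secondary free to take the core 0 lock itself.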
00:07:22.207 [2024-10-01 08:22:13.842550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.207 [2024-10-01 08:22:13.975675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.776 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.776 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:22.776 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3530686 00:07:22.776 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3530686 00:07:22.776 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.347 lslocks: write error 00:07:23.347 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3530686 00:07:23.347 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3530686 ']' 00:07:23.347 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3530686 00:07:23.347 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:23.347 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.347 08:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3530686 00:07:23.347 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.347 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.347 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3530686' 00:07:23.347 killing process with pid 3530686 00:07:23.347 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3530686 00:07:23.347 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3530686 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3531016 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3531016 ']' 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3531016 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3531016 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3531016' 00:07:23.917 
killing process with pid 3531016 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3531016 00:07:23.917 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3531016 00:07:24.177 00:07:24.177 real 0m2.882s 00:07:24.177 user 0m3.153s 00:07:24.177 sys 0m0.869s 00:07:24.177 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.177 08:22:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.177 ************************************ 00:07:24.177 END TEST non_locking_app_on_locked_coremask 00:07:24.177 ************************************ 00:07:24.177 08:22:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:24.177 08:22:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.177 08:22:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.177 08:22:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.177 ************************************ 00:07:24.177 START TEST locking_app_on_unlocked_coremask 00:07:24.177 ************************************ 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3531390 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3531390 /var/tmp/spdk.sock 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3531390 ']' 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.177 08:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.177 [2024-10-01 08:22:15.899566] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:24.177 [2024-10-01 08:22:15.899619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531390 ] 00:07:24.177 [2024-10-01 08:22:15.961724] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:24.177 [2024-10-01 08:22:15.961755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.437 [2024-10-01 08:22:16.028770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.007 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.007 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:25.007 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:25.008 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3531523 00:07:25.008 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3531523 /var/tmp/spdk2.sock 00:07:25.008 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3531523 ']' 00:07:25.008 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.008 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.008 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.008 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.008 08:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.008 [2024-10-01 08:22:16.717800] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:07:25.008 [2024-10-01 08:22:16.717852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531523 ] 00:07:25.008 [2024-10-01 08:22:16.803920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.268 [2024-10-01 08:22:16.937755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.839 08:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.839 08:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:25.839 08:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3531523 00:07:25.839 08:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3531523 00:07:25.839 08:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.410 lslocks: write error 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3531390 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3531390 ']' 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3531390 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3531390 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3531390' 00:07:26.410 killing process with pid 3531390 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3531390 00:07:26.410 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3531390 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3531523 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3531523 ']' 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3531523 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3531523 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.983 08:22:18 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3531523' 00:07:26.983 killing process with pid 3531523 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3531523 00:07:26.983 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3531523 00:07:27.243 00:07:27.243 real 0m3.006s 00:07:27.243 user 0m3.292s 00:07:27.243 sys 0m0.921s 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.243 ************************************ 00:07:27.243 END TEST locking_app_on_unlocked_coremask 00:07:27.243 ************************************ 00:07:27.243 08:22:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:27.243 08:22:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.243 08:22:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.243 08:22:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.243 ************************************ 00:07:27.243 START TEST locking_app_on_locked_coremask 00:07:27.243 ************************************ 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3532101 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3532101 /var/tmp/spdk.sock 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3532101 ']' 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.243 08:22:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.243 [2024-10-01 08:22:18.985332] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
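Every test in this file blocks on waitforlisten before issuing RPCs; in the trace it prints the "Waiting for process to start up and listen on UNIX domain socket ..." banner, sets max_retries=100, and polls until the socket answers. A compact sketch of that barrier, assuming rpc_get_methods as the liveness probe and eliding the retry bookkeeping behind the (( i == 0 )) check:

    waitforlisten() {                    # compact sketch of the startup barrier traced above
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" || return 1                   # the target died while we waited
            [ -S "$rpc_addr" ] &&
                scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null &&
                return 0                                 # socket exists and answers RPCs
            sleep 0.5
        done
        return 1                                         # never came up within the retry budget
    }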
00:07:27.243 [2024-10-01 08:22:18.985383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532101 ] 00:07:27.243 [2024-10-01 08:22:19.045736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.503 [2024-10-01 08:22:19.108242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3532123 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3532123 /var/tmp/spdk2.sock 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3532123 /var/tmp/spdk2.sock 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3532123 /var/tmp/spdk2.sock 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3532123 ']' 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.073 08:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.073 [2024-10-01 08:22:19.827549] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:07:28.073 [2024-10-01 08:22:19.827604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532123 ] 00:07:28.333 [2024-10-01 08:22:19.918965] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3532101 has claimed it. 00:07:28.333 [2024-10-01 08:22:19.923012] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:28.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3532123) - No such process 00:07:28.903 ERROR: process (pid: 3532123) is no longer running 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3532101 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3532101 00:07:28.903 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.163 lslocks: write error 00:07:29.163 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3532101 00:07:29.163 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3532101 ']' 00:07:29.163 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3532101 00:07:29.163 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:29.163 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.163 08:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3532101 00:07:29.423 08:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.423 08:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.423 08:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3532101' 00:07:29.423 killing process with pid 3532101 00:07:29.423 08:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3532101 00:07:29.423 08:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3532101 00:07:29.682 00:07:29.682 real 0m2.349s 00:07:29.682 user 0m2.650s 00:07:29.682 sys 0m0.671s 00:07:29.682 08:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
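The failure above is the lock mechanism working as intended: claim_cpu_cores in app.c refuses core 0 because pid 3532101 already holds it, so the contender exits before ever listening and the NOT-wrapped waitforlisten reports the expected "No such process". Judging from the lslocks checks and the lock names later in this log, each claimed core maps to a /var/tmp/spdk_cpu_lock_NNN file held under an exclusive lock; the shell equivalent of one claim, purely illustrative since the real claim happens in C inside app.c:

    # emulate one core claim the way the lock files behave (illustrative sketch only)
    core=0
    lock=/var/tmp/spdk_cpu_lock_$(printf '%03d' "$core")
    exec 9>"$lock"                       # open (and create) the per-core lock file on fd 9
    if ! flock -xn 9; then               # exclusive and non-blocking, like claim_cpu_cores
        echo "Cannot create lock on core $core, probably another process has claimed it" >&2
        exit 1
    fi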
00:07:29.682 08:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.682 ************************************ 00:07:29.682 END TEST locking_app_on_locked_coremask 00:07:29.682 ************************************ 00:07:29.682 08:22:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:29.682 08:22:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.682 08:22:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.682 08:22:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.682 ************************************ 00:07:29.682 START TEST locking_overlapped_coremask 00:07:29.682 ************************************ 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3532486 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3532486 /var/tmp/spdk.sock 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3532486 ']' 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.682 08:22:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.682 [2024-10-01 08:22:21.410066] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:07:29.682 [2024-10-01 08:22:21.410125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532486 ] 00:07:29.682 [2024-10-01 08:22:21.471651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.942 [2024-10-01 08:22:21.541933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.943 [2024-10-01 08:22:21.542032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.943 [2024-10-01 08:22:21.542045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3532813 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3532813 /var/tmp/spdk2.sock 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3532813 /var/tmp/spdk2.sock 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3532813 /var/tmp/spdk2.sock 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3532813 ']' 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.514 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.514 [2024-10-01 08:22:22.258506] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
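The two masks in locking_overlapped_coremask are chosen to collide on exactly one core: the primary runs with -m 0x7 (binary 111, cores 0 to 2, matching the three reactor notices above) and the contender with -m 0x1c (binary 11100, cores 2 to 4), so core 2 is the one claim_cpu_cores must refuse. The overlap is a single bitwise AND:

    # cores 0-2 vs cores 2-4 share exactly core 2
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> overlap mask: 0x4, i.e. core 2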
00:07:30.514 [2024-10-01 08:22:22.258561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532813 ] 00:07:30.514 [2024-10-01 08:22:22.331947] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3532486 has claimed it. 00:07:30.514 [2024-10-01 08:22:22.331979] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:31.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3532813) - No such process 00:07:31.086 ERROR: process (pid: 3532813) is no longer running 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3532486 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3532486 ']' 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3532486 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.086 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3532486 00:07:31.347 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.347 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.347 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3532486' 00:07:31.347 killing process with pid 3532486 00:07:31.347 08:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3532486 00:07:31.347 08:22:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3532486 00:07:31.347 00:07:31.347 real 0m1.819s 00:07:31.347 user 0m5.183s 00:07:31.347 sys 0m0.382s 00:07:31.347 08:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.347 08:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.347 ************************************ 00:07:31.347 END TEST locking_overlapped_coremask 00:07:31.347 ************************************ 00:07:31.609 08:22:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:31.609 08:22:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.609 08:22:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.609 08:22:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.609 ************************************ 00:07:31.609 START TEST locking_overlapped_coremask_via_rpc 00:07:31.609 ************************************ 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3532878 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3532878 /var/tmp/spdk.sock 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3532878 ']' 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.609 08:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.609 [2024-10-01 08:22:23.302167] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:31.609 [2024-10-01 08:22:23.302230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532878 ] 00:07:31.609 [2024-10-01 08:22:23.366659] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
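The per-core claims that the previous test tripped over are plain files, as the check_remaining_locks trace above shows: one /var/tmp/spdk_cpu_lock_NNN file per claimed core. A condensed restatement of that check for a 0x7 mask (cores 0-2, hence files 000..002):

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]   # exactly the three expected lock files remain

This new target was started with --disable-cpumask-locks, so it skips creating those files at startup, which is what the 'CPU core locks deactivated' notice reports.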
00:07:31.609 [2024-10-01 08:22:23.366695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.869 [2024-10-01 08:22:23.441143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.869 [2024-10-01 08:22:23.441375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.869 [2024-10-01 08:22:23.441378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3533188 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3533188 /var/tmp/spdk2.sock 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3533188 ']' 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.441 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.441 [2024-10-01 08:22:24.163667] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:32.441 [2024-10-01 08:22:24.163722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3533188 ] 00:07:32.441 [2024-10-01 08:22:24.234739] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
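With locking disabled on both sides, the overlapping masks 0x7 and 0x1c coexist for now. The test then switches the locks on at runtime over JSON-RPC, which is where the conflict resurfaces in the trace below. In rpc.py terms, assuming the default socket for the first target:

  scripts/rpc.py framework_enable_cpumask_locks                           # first target: claims cores 0-2, succeeds
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # second target: core 2 already claimed, fails

The trace issues the same calls through the rpc_cmd wrapper rather than rpc.py directly.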
00:07:32.441 [2024-10-01 08:22:24.234762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.701 [2024-10-01 08:22:24.348384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.701 [2024-10-01 08:22:24.348539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.701 [2024-10-01 08:22:24.348542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.272 [2024-10-01 08:22:24.963055] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3532878 has claimed it. 
00:07:33.272 request: 00:07:33.272 { 00:07:33.272 "method": "framework_enable_cpumask_locks", 00:07:33.272 "req_id": 1 00:07:33.272 } 00:07:33.272 Got JSON-RPC error response 00:07:33.272 response: 00:07:33.272 { 00:07:33.272 "code": -32603, 00:07:33.272 "message": "Failed to claim CPU core: 2" 00:07:33.272 } 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3532878 /var/tmp/spdk.sock 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3532878 ']' 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.272 08:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3533188 /var/tmp/spdk2.sock 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3533188 ']' 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
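The es bookkeeping traced above belongs to the NOT helper that wraps every expected failure in this log. A simplified sketch of its shape, reconstructed from the traced line numbers (the real helper in autotest_common.sh also validates its argument via valid_exec_arg):

  NOT() {
      local es=0
      "$@" || es=$?            # run the wrapped command, keeping any nonzero exit status
      (( es > 128 )) && es=1   # exits above 128 (signal deaths) count as plain failures (an assumption)
      (( !es == 0 ))           # succeed exactly when the wrapped command failed
  }

Here rpc_cmd exited nonzero because of the -32603 response, so es=1, (( !es == 0 )) holds, and the NOT call reports success.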
00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:33.534 00:07:33.534 real 0m2.094s 00:07:33.534 user 0m0.864s 00:07:33.534 sys 0m0.159s 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.534 08:22:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.534 ************************************ 00:07:33.534 END TEST locking_overlapped_coremask_via_rpc 00:07:33.534 ************************************ 00:07:33.795 08:22:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:33.795 08:22:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3532878 ]] 00:07:33.795 08:22:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3532878 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3532878 ']' 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3532878 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3532878 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3532878' 00:07:33.795 killing process with pid 3532878 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3532878 00:07:33.795 08:22:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3532878 00:07:34.055 08:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3533188 ]] 00:07:34.055 08:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3533188 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3533188 ']' 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3533188 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3533188 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3533188' 00:07:34.055 killing process with pid 3533188 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3533188 00:07:34.055 08:22:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3533188 00:07:34.316 08:22:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.316 08:22:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:34.316 08:22:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3532878 ]] 00:07:34.316 08:22:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3532878 00:07:34.316 08:22:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3532878 ']' 00:07:34.316 08:22:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3532878 00:07:34.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3532878) - No such process 00:07:34.316 08:22:25 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3532878 is not found' 00:07:34.316 Process with pid 3532878 is not found 00:07:34.316 08:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3533188 ]] 00:07:34.316 08:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3533188 00:07:34.316 08:22:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3533188 ']' 00:07:34.316 08:22:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3533188 00:07:34.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3533188) - No such process 00:07:34.316 08:22:25 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3533188 is not found' 00:07:34.316 Process with pid 3533188 is not found 00:07:34.316 08:22:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.316 00:07:34.316 real 0m16.554s 00:07:34.316 user 0m28.666s 00:07:34.316 sys 0m4.971s 00:07:34.316 08:22:25 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.316 08:22:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.316 ************************************ 00:07:34.316 END TEST cpu_locks 00:07:34.316 ************************************ 00:07:34.316 00:07:34.316 real 0m42.843s 00:07:34.316 user 1m23.942s 00:07:34.316 sys 0m8.256s 00:07:34.316 08:22:25 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.316 08:22:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.316 ************************************ 00:07:34.316 END TEST event 00:07:34.316 ************************************ 00:07:34.316 08:22:26 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:34.316 08:22:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.316 08:22:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.316 08:22:26 -- common/autotest_common.sh@10 -- # set +x 00:07:34.316 ************************************ 00:07:34.316 START TEST thread 00:07:34.316 ************************************ 00:07:34.316 08:22:26 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:34.578 * Looking for test storage... 00:07:34.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:34.578 08:22:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.578 08:22:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.578 08:22:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.578 08:22:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.578 08:22:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.578 08:22:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.578 08:22:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.578 08:22:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.578 08:22:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.578 08:22:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.578 08:22:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.578 08:22:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:34.578 08:22:26 thread -- scripts/common.sh@345 -- # : 1 00:07:34.578 08:22:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.578 08:22:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.578 08:22:26 thread -- scripts/common.sh@365 -- # decimal 1 00:07:34.578 08:22:26 thread -- scripts/common.sh@353 -- # local d=1 00:07:34.578 08:22:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.578 08:22:26 thread -- scripts/common.sh@355 -- # echo 1 00:07:34.578 08:22:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.578 08:22:26 thread -- scripts/common.sh@366 -- # decimal 2 00:07:34.578 08:22:26 thread -- scripts/common.sh@353 -- # local d=2 00:07:34.578 08:22:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.578 08:22:26 thread -- scripts/common.sh@355 -- # echo 2 00:07:34.578 08:22:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.578 08:22:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.578 08:22:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.578 08:22:26 thread -- scripts/common.sh@368 -- # return 0 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:34.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.578 --rc genhtml_branch_coverage=1 00:07:34.578 --rc genhtml_function_coverage=1 00:07:34.578 --rc genhtml_legend=1 00:07:34.578 --rc geninfo_all_blocks=1 00:07:34.578 --rc geninfo_unexecuted_blocks=1 00:07:34.578 00:07:34.578 ' 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:34.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.578 --rc genhtml_branch_coverage=1 00:07:34.578 --rc genhtml_function_coverage=1 00:07:34.578 --rc genhtml_legend=1 00:07:34.578 --rc geninfo_all_blocks=1 00:07:34.578 --rc geninfo_unexecuted_blocks=1 00:07:34.578 
00:07:34.578 ' 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:34.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.578 --rc genhtml_branch_coverage=1 00:07:34.578 --rc genhtml_function_coverage=1 00:07:34.578 --rc genhtml_legend=1 00:07:34.578 --rc geninfo_all_blocks=1 00:07:34.578 --rc geninfo_unexecuted_blocks=1 00:07:34.578 00:07:34.578 ' 00:07:34.578 08:22:26 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:34.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.578 --rc genhtml_branch_coverage=1 00:07:34.579 --rc genhtml_function_coverage=1 00:07:34.579 --rc genhtml_legend=1 00:07:34.579 --rc geninfo_all_blocks=1 00:07:34.579 --rc geninfo_unexecuted_blocks=1 00:07:34.579 00:07:34.579 ' 00:07:34.579 08:22:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:34.579 08:22:26 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:34.579 08:22:26 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.579 08:22:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.579 ************************************ 00:07:34.579 START TEST thread_poller_perf 00:07:34.579 ************************************ 00:07:34.579 08:22:26 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:34.579 [2024-10-01 08:22:26.327835] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:34.579 [2024-10-01 08:22:26.327957] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3533639 ] 00:07:34.579 [2024-10-01 08:22:26.396052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.839 [2024-10-01 08:22:26.461139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.839 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:35.780 ====================================== 00:07:35.780 busy:2408060684 (cyc) 00:07:35.780 total_run_count: 287000 00:07:35.780 tsc_hz: 2400000000 (cyc) 00:07:35.780 ====================================== 00:07:35.780 poller_cost: 8390 (cyc), 3495 (nsec) 00:07:35.780 00:07:35.780 real 0m1.218s 00:07:35.780 user 0m1.135s 00:07:35.780 sys 0m0.078s 00:07:35.780 08:22:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.780 08:22:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.780 ************************************ 00:07:35.780 END TEST thread_poller_perf 00:07:35.780 ************************************ 00:07:35.780 08:22:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.780 08:22:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:35.780 08:22:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.780 08:22:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.780 ************************************ 00:07:35.780 START TEST thread_poller_perf 00:07:35.780 ************************************ 00:07:35.780 08:22:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:36.040 [2024-10-01 08:22:27.622133] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:36.040 [2024-10-01 08:22:27.622237] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3533995 ] 00:07:36.040 [2024-10-01 08:22:27.684078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.040 [2024-10-01 08:22:27.747925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.040 Running 1000 pollers for 1 seconds with 0 microseconds period. 
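The poller_perf summary above is straightforward integer arithmetic over the counters it prints: per-poll cost is the busy TSC cycle count divided by the number of polls, converted to nanoseconds through the TSC frequency. Checking the 1-microsecond-period run above (the 0-period run below follows the same formula):

  echo $(( 2408060684 / 287000 ))             # 8390 cyc per poll, as reported
  echo $(( 8390 * 1000000000 / 2400000000 ))  # 3495 nsec at the 2.4 GHz tsc_hz shown

The exact rounding inside poller_perf is an assumption; these truncating divisions reproduce the printed values.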
00:07:36.982 ====================================== 00:07:36.982 busy:2402181534 (cyc) 00:07:36.982 total_run_count: 3809000 00:07:36.982 tsc_hz: 2400000000 (cyc) 00:07:36.982 ====================================== 00:07:36.982 poller_cost: 630 (cyc), 262 (nsec) 00:07:36.982 00:07:36.982 real 0m1.202s 00:07:36.982 user 0m1.130s 00:07:36.982 sys 0m0.068s 00:07:36.982 08:22:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.982 08:22:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:36.982 ************************************ 00:07:36.982 END TEST thread_poller_perf 00:07:36.982 ************************************ 00:07:37.243 08:22:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:37.243 00:07:37.243 real 0m2.779s 00:07:37.243 user 0m2.436s 00:07:37.243 sys 0m0.356s 00:07:37.243 08:22:28 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.243 08:22:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 ************************************ 00:07:37.243 END TEST thread 00:07:37.243 ************************************ 00:07:37.243 08:22:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:37.243 08:22:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:37.243 08:22:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.243 08:22:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.243 08:22:28 -- common/autotest_common.sh@10 -- # set +x 00:07:37.243 ************************************ 00:07:37.243 START TEST app_cmdline 00:07:37.243 ************************************ 00:07:37.243 08:22:28 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:37.243 * Looking for test storage... 00:07:37.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:37.243 08:22:29 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:37.243 08:22:29 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:37.243 08:22:29 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.503 08:22:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:37.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.503 --rc genhtml_branch_coverage=1 00:07:37.503 --rc genhtml_function_coverage=1 00:07:37.503 --rc genhtml_legend=1 00:07:37.503 --rc geninfo_all_blocks=1 00:07:37.503 --rc geninfo_unexecuted_blocks=1 00:07:37.503 00:07:37.503 ' 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:37.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.503 --rc genhtml_branch_coverage=1 00:07:37.503 --rc genhtml_function_coverage=1 00:07:37.503 --rc genhtml_legend=1 00:07:37.503 --rc geninfo_all_blocks=1 00:07:37.503 --rc geninfo_unexecuted_blocks=1 00:07:37.503 00:07:37.503 ' 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:37.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.503 --rc genhtml_branch_coverage=1 00:07:37.503 --rc genhtml_function_coverage=1 00:07:37.503 --rc genhtml_legend=1 00:07:37.503 --rc geninfo_all_blocks=1 00:07:37.503 --rc geninfo_unexecuted_blocks=1 00:07:37.503 00:07:37.503 ' 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:37.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.503 --rc genhtml_branch_coverage=1 00:07:37.503 --rc genhtml_function_coverage=1 00:07:37.503 --rc genhtml_legend=1 00:07:37.503 --rc geninfo_all_blocks=1 00:07:37.503 --rc geninfo_unexecuted_blocks=1 00:07:37.503 00:07:37.503 ' 00:07:37.503 08:22:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:37.503 08:22:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3534392 00:07:37.503 08:22:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3534392 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3534392 ']' 00:07:37.503 08:22:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@838 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.503 08:22:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.503 [2024-10-01 08:22:29.180574] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:07:37.503 [2024-10-01 08:22:29.180643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3534392 ] 00:07:37.503 [2024-10-01 08:22:29.243759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.503 [2024-10-01 08:22:29.317946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.444 08:22:29 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.444 08:22:29 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:38.444 08:22:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:38.444 { 00:07:38.444 "version": "SPDK v25.01-pre git sha1 718f46c19", 00:07:38.444 "fields": { 00:07:38.444 "major": 25, 00:07:38.444 "minor": 1, 00:07:38.444 "patch": 0, 00:07:38.444 "suffix": "-pre", 00:07:38.444 "commit": "718f46c19" 00:07:38.444 } 00:07:38.444 } 00:07:38.444 08:22:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.445 08:22:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:38.445 08:22:30 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.705 request: 00:07:38.705 { 00:07:38.705 "method": "env_dpdk_get_mem_stats", 00:07:38.705 "req_id": 1 00:07:38.705 } 00:07:38.705 Got JSON-RPC error response 00:07:38.705 response: 00:07:38.705 { 00:07:38.705 "code": -32601, 00:07:38.705 "message": "Method not found" 00:07:38.705 } 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.705 08:22:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3534392 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3534392 ']' 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3534392 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3534392 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3534392' 00:07:38.705 killing process with pid 3534392 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@969 -- # kill 3534392 00:07:38.705 08:22:30 app_cmdline -- common/autotest_common.sh@974 -- # wait 3534392 00:07:38.966 00:07:38.966 real 0m1.722s 00:07:38.966 user 0m2.036s 00:07:38.966 sys 0m0.464s 00:07:38.966 08:22:30 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.966 08:22:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.966 ************************************ 00:07:38.966 END TEST app_cmdline 00:07:38.966 ************************************ 00:07:38.966 08:22:30 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:38.966 08:22:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.966 08:22:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.966 08:22:30 -- common/autotest_common.sh@10 -- # set +x 00:07:38.966 ************************************ 00:07:38.966 START TEST version 00:07:38.966 ************************************ 00:07:38.966 08:22:30 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:39.227 * Looking for test storage... 
00:07:39.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:39.227 08:22:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.227 08:22:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.227 08:22:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.227 08:22:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.227 08:22:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.227 08:22:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.227 08:22:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.227 08:22:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.227 08:22:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.227 08:22:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.227 08:22:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.227 08:22:30 version -- scripts/common.sh@344 -- # case "$op" in 00:07:39.227 08:22:30 version -- scripts/common.sh@345 -- # : 1 00:07:39.227 08:22:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.227 08:22:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.227 08:22:30 version -- scripts/common.sh@365 -- # decimal 1 00:07:39.227 08:22:30 version -- scripts/common.sh@353 -- # local d=1 00:07:39.227 08:22:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.227 08:22:30 version -- scripts/common.sh@355 -- # echo 1 00:07:39.227 08:22:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.227 08:22:30 version -- scripts/common.sh@366 -- # decimal 2 00:07:39.227 08:22:30 version -- scripts/common.sh@353 -- # local d=2 00:07:39.227 08:22:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.227 08:22:30 version -- scripts/common.sh@355 -- # echo 2 00:07:39.227 08:22:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.227 08:22:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.227 08:22:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.227 08:22:30 version -- scripts/common.sh@368 -- # return 0 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.227 --rc genhtml_branch_coverage=1 00:07:39.227 --rc genhtml_function_coverage=1 00:07:39.227 --rc genhtml_legend=1 00:07:39.227 --rc geninfo_all_blocks=1 00:07:39.227 --rc geninfo_unexecuted_blocks=1 00:07:39.227 00:07:39.227 ' 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.227 --rc genhtml_branch_coverage=1 00:07:39.227 --rc genhtml_function_coverage=1 00:07:39.227 --rc genhtml_legend=1 00:07:39.227 --rc geninfo_all_blocks=1 00:07:39.227 --rc geninfo_unexecuted_blocks=1 00:07:39.227 00:07:39.227 ' 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:39.227 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.227 --rc genhtml_branch_coverage=1 00:07:39.227 --rc genhtml_function_coverage=1 00:07:39.227 --rc genhtml_legend=1 00:07:39.227 --rc geninfo_all_blocks=1 00:07:39.227 --rc geninfo_unexecuted_blocks=1 00:07:39.227 00:07:39.227 ' 00:07:39.227 08:22:30 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:39.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.228 --rc genhtml_branch_coverage=1 00:07:39.228 --rc genhtml_function_coverage=1 00:07:39.228 --rc genhtml_legend=1 00:07:39.228 --rc geninfo_all_blocks=1 00:07:39.228 --rc geninfo_unexecuted_blocks=1 00:07:39.228 00:07:39.228 ' 00:07:39.228 08:22:30 version -- app/version.sh@17 -- # get_header_version major 00:07:39.228 08:22:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.228 08:22:30 version -- app/version.sh@14 -- # cut -f2 00:07:39.228 08:22:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.228 08:22:30 version -- app/version.sh@17 -- # major=25 00:07:39.228 08:22:30 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.228 08:22:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.228 08:22:30 version -- app/version.sh@14 -- # cut -f2 00:07:39.228 08:22:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.228 08:22:30 version -- app/version.sh@18 -- # minor=1 00:07:39.228 08:22:30 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.228 08:22:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.228 08:22:30 version -- app/version.sh@14 -- # cut -f2 00:07:39.228 08:22:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.228 08:22:30 version -- app/version.sh@19 -- # patch=0 00:07:39.228 08:22:30 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.228 08:22:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.228 08:22:30 version -- app/version.sh@14 -- # cut -f2 00:07:39.228 08:22:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.228 08:22:30 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.228 08:22:30 version -- app/version.sh@22 -- # version=25.1 00:07:39.228 08:22:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.228 08:22:30 version -- app/version.sh@28 -- # version=25.1rc0 00:07:39.228 08:22:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:39.228 08:22:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:39.228 08:22:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:39.228 08:22:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:39.228 00:07:39.228 real 0m0.277s 00:07:39.228 user 0m0.163s 00:07:39.228 sys 0m0.160s 00:07:39.228 08:22:30 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.228 
08:22:30 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.228 ************************************ 00:07:39.228 END TEST version 00:07:39.228 ************************************ 00:07:39.228 08:22:31 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:39.228 08:22:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:39.228 08:22:31 -- spdk/autotest.sh@194 -- # uname -s 00:07:39.228 08:22:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:39.228 08:22:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:39.228 08:22:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:39.228 08:22:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:39.228 08:22:31 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:39.228 08:22:31 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:39.228 08:22:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.228 08:22:31 -- common/autotest_common.sh@10 -- # set +x 00:07:39.489 08:22:31 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:39.489 08:22:31 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:39.489 08:22:31 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:39.489 08:22:31 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:39.489 08:22:31 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:39.489 08:22:31 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:39.489 08:22:31 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.489 08:22:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:39.489 08:22:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.489 08:22:31 -- common/autotest_common.sh@10 -- # set +x 00:07:39.489 ************************************ 00:07:39.489 START TEST nvmf_tcp 00:07:39.489 ************************************ 00:07:39.489 08:22:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.489 * Looking for test storage... 
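The version test above assembled 25.1rc0 by scraping the C header rather than querying a target; each component comes from one #define line. Condensed from the traced pipeline (repository path shortened):

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25

MINOR, PATCH and SUFFIX are extracted the same way; with patch equal to 0 the -pre suffix becomes rc0, yielding the 25.1rc0 that matched python's spdk.__version__.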
00:07:39.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:39.489 08:22:31 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:39.489 08:22:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:39.489 08:22:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:39.489 08:22:31 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:39.489 08:22:31 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.750 08:22:31 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:39.750 08:22:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:39.750 08:22:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.750 08:22:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:39.750 08:22:31 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.750 08:22:31 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.750 08:22:31 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.750 08:22:31 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:39.750 08:22:31 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.750 08:22:31 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.750 --rc genhtml_branch_coverage=1 00:07:39.750 --rc genhtml_function_coverage=1 00:07:39.750 --rc genhtml_legend=1 00:07:39.750 --rc geninfo_all_blocks=1 00:07:39.750 --rc geninfo_unexecuted_blocks=1 00:07:39.750 00:07:39.750 ' 00:07:39.750 08:22:31 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.750 --rc genhtml_branch_coverage=1 00:07:39.750 --rc genhtml_function_coverage=1 00:07:39.750 --rc genhtml_legend=1 00:07:39.750 --rc geninfo_all_blocks=1 00:07:39.750 --rc geninfo_unexecuted_blocks=1 00:07:39.750 00:07:39.750 ' 00:07:39.750 08:22:31 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:07:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.750 --rc genhtml_branch_coverage=1 00:07:39.750 --rc genhtml_function_coverage=1 00:07:39.750 --rc genhtml_legend=1 00:07:39.750 --rc geninfo_all_blocks=1 00:07:39.750 --rc geninfo_unexecuted_blocks=1 00:07:39.750 00:07:39.750 ' 00:07:39.750 08:22:31 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.750 --rc genhtml_branch_coverage=1 00:07:39.750 --rc genhtml_function_coverage=1 00:07:39.750 --rc genhtml_legend=1 00:07:39.750 --rc geninfo_all_blocks=1 00:07:39.750 --rc geninfo_unexecuted_blocks=1 00:07:39.750 00:07:39.750 ' 00:07:39.750 08:22:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:39.750 08:22:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:39.750 08:22:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:39.750 08:22:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:39.750 08:22:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.750 08:22:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.750 ************************************ 00:07:39.750 START TEST nvmf_target_core 00:07:39.750 ************************************ 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:39.750 * Looking for test storage... 00:07:39.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.750 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:39.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.751 --rc genhtml_branch_coverage=1 00:07:39.751 --rc genhtml_function_coverage=1 00:07:39.751 --rc genhtml_legend=1 00:07:39.751 --rc geninfo_all_blocks=1 00:07:39.751 --rc geninfo_unexecuted_blocks=1 00:07:39.751 00:07:39.751 ' 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:39.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.751 --rc genhtml_branch_coverage=1 00:07:39.751 --rc genhtml_function_coverage=1 00:07:39.751 --rc genhtml_legend=1 00:07:39.751 --rc geninfo_all_blocks=1 00:07:39.751 --rc geninfo_unexecuted_blocks=1 00:07:39.751 00:07:39.751 ' 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:39.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.751 --rc genhtml_branch_coverage=1 00:07:39.751 --rc genhtml_function_coverage=1 00:07:39.751 --rc genhtml_legend=1 00:07:39.751 --rc geninfo_all_blocks=1 00:07:39.751 --rc geninfo_unexecuted_blocks=1 00:07:39.751 00:07:39.751 ' 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:39.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.751 --rc genhtml_branch_coverage=1 00:07:39.751 --rc genhtml_function_coverage=1 00:07:39.751 --rc genhtml_legend=1 00:07:39.751 --rc geninfo_all_blocks=1 00:07:39.751 --rc geninfo_unexecuted_blocks=1 00:07:39.751 00:07:39.751 ' 00:07:39.751 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.012 08:22:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.013 
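
Note on the "[: : integer expression expected" line above: it appears when build_nvmf_app_args evaluates '[' '' -eq 1 ']' with an empty value, because test's -eq requires integer operands. The failing test returns non-zero, so the guarded branch is simply skipped and the run continues; the message is noise, not a stopper. A small reproduction plus two defensive rewrites (hypothetical fixes, not what common.sh actually does):

    flag=""
    [ "$flag" -eq 1 ] && echo match       # stderr: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo match              # default empty to 0: silent
    [[ -n "$flag" && "$flag" -eq 1 ]] && echo match   # only test when non-empty
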
************************************ 00:07:40.013 START TEST nvmf_abort 00:07:40.013 ************************************ 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:40.013 * Looking for test storage... 00:07:40.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.013 --rc genhtml_branch_coverage=1 00:07:40.013 --rc genhtml_function_coverage=1 00:07:40.013 --rc genhtml_legend=1 00:07:40.013 --rc geninfo_all_blocks=1 00:07:40.013 --rc geninfo_unexecuted_blocks=1 00:07:40.013 00:07:40.013 ' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.013 --rc genhtml_branch_coverage=1 00:07:40.013 --rc genhtml_function_coverage=1 00:07:40.013 --rc genhtml_legend=1 00:07:40.013 --rc geninfo_all_blocks=1 00:07:40.013 --rc geninfo_unexecuted_blocks=1 00:07:40.013 00:07:40.013 ' 00:07:40.013 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.013 --rc genhtml_branch_coverage=1 00:07:40.013 --rc genhtml_function_coverage=1 00:07:40.013 --rc genhtml_legend=1 00:07:40.013 --rc geninfo_all_blocks=1 00:07:40.013 --rc geninfo_unexecuted_blocks=1 00:07:40.013 00:07:40.013 ' 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:40.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.274 --rc genhtml_branch_coverage=1 00:07:40.274 --rc genhtml_function_coverage=1 00:07:40.274 --rc genhtml_legend=1 00:07:40.274 --rc geninfo_all_blocks=1 00:07:40.274 --rc geninfo_unexecuted_blocks=1 00:07:40.274 00:07:40.274 ' 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
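
Note on the NVME_HOSTNQN/NVME_HOSTID pair traced above: nvme gen-hostnqn emits a UUID-based NQN in the standard nqn.2014-08.org.nvmexpress:uuid:<uuid> form, and common.sh keeps the bare UUID as the host ID. A sketch of the same derivation; the uuidgen fallback is an assumption for machines without nvme-cli:

    hostnqn=$(nvme gen-hostnqn 2>/dev/null) \
      || hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
    hostid=${hostnqn##*uuid:}        # strip everything through "uuid:"
    echo "$hostnqn  ->  $hostid"
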
00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.274 08:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.414 08:22:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:48.414 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:48.415 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:48.415 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:48.415 08:22:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:48.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:48.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.415 08:22:38 
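
Note on the device discovery above: gather_supported_nvmf_pci_devs matches PCI vendor/device IDs (0x8086:0x159b is the Intel E810 variant found here, bound to the ice driver) and then resolves each function's kernel netdev through /sys/bus/pci/devices/<bdf>/net/, which is where cvl_0_0 and cvl_0_1 come from. A standalone sketch of the same resolution; using lspci for the ID match is an assumption, as the harness walks a cached PCI map instead:

    # Map E810 ports (device id 0x159b) to their kernel net device names.
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] && echo "$pci -> ${netdir##*/}"
      done
    done
    # on this box: 0000:4b:00.0 -> cvl_0_0, 0000:4b:00.1 -> cvl_0_1
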
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.415 08:22:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.729 ms 00:07:48.415 00:07:48.415 --- 10.0.0.2 ping statistics --- 00:07:48.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.415 rtt min/avg/max/mdev = 0.729/0.729/0.729/0.000 ms 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:07:48.415 00:07:48.415 --- 10.0.0.1 ping statistics --- 00:07:48.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.415 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=3538838 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3538838 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3538838 ']' 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.415 08:22:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 [2024-10-01 08:22:39.272497] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
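
Note on the nvmf_tcp_init sequence above, condensed: with two physical ports on the same wire, the harness moves the target-side port into its own network namespace so the initiator (10.0.0.1, root namespace) and target (10.0.0.2, cvl_0_0_ns_spdk) exercise a real TCP path instead of loopback. The ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can find it again, and the two pings verify both directions before any NVMe-oF traffic. The same steps, stripped of xtrace noise:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
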
00:07:48.415 [2024-10-01 08:22:39.272564] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.415 [2024-10-01 08:22:39.363963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.415 [2024-10-01 08:22:39.458425] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.415 [2024-10-01 08:22:39.458486] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.415 [2024-10-01 08:22:39.458494] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.415 [2024-10-01 08:22:39.458501] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.415 [2024-10-01 08:22:39.458508] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.415 [2024-10-01 08:22:39.459799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.415 [2024-10-01 08:22:39.460097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.415 [2024-10-01 08:22:39.460200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 [2024-10-01 08:22:40.126764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 Malloc0 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 Delay0 
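
Note on the core accounting above: nvmf_tgt is started with -m 0xE, and 0xE = 0b1110, i.e. CPUs 1, 2 and 3 set with CPU 0 clear. That is exactly why DPDK reports "Total cores available: 3" and why three reactors come up on cores 1-3; core 0 is left free for the shell and the abort example client, which later pins itself there with -c 0x1. A one-liner check of the mask:

    mask=0xE
    for cpu in {0..3}; do (( mask >> cpu & 1 )) && echo "reactor expected on core $cpu"; done
    # -> cores 1 2 3, matching the three "Reactor started" notices above
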
00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 [2024-10-01 08:22:40.207232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.415 08:22:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:48.676 [2024-10-01 08:22:40.326549] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:51.217 Initializing NVMe Controllers 00:07:51.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:51.217 controller IO queue size 128 less than required 00:07:51.217 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:51.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:51.217 Initialization complete. Launching workers. 
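
Note on the provisioning RPCs above, gathered in one place: transport first, then a 64 MiB / 4096-byte-block malloc bdev, a delay bdev stacked on it, and a subsystem exposing the delay bdev on 10.0.0.2:4420. The 1000000 values are microseconds of injected latency (roughly 1 s per I/O), which is the point of the test: against a 1 s device at queue depth 128, almost every request is still queued when the abort example fires. A sketch written as direct scripts/rpc.py calls; rpc_cmd in the trace is, on this reading, a thin wrapper over the same script aimed at the app's RPC socket:

    R=./scripts/rpc.py
    $R nvmf_create_transport -t tcp -o -u 8192 -a 256
    $R bdev_malloc_create 64 4096 -b Malloc0
    $R bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write latency, us
    $R nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
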
00:07:51.217 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28236 00:07:51.217 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28301, failed to submit 62 00:07:51.217 success 28240, unsuccessful 61, failed 0 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.217 rmmod nvme_tcp 00:07:51.217 rmmod nvme_fabrics 00:07:51.217 rmmod nvme_keyring 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.217 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3538838 ']' 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3538838 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3538838 ']' 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3538838 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3538838 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3538838' 00:07:51.218 killing process with pid 3538838 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3538838 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3538838 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:51.218 08:22:42 
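
Note on the abort tallies above: the numbers reconcile. 28240 successful + 61 unsuccessful = 28301 aborts completed, matching "abort submitted 28301"; another 62 could not be submitted at all. On the I/O side, 28236 requests ended as "failed", i.e. cancelled by those aborts, and only 127 completed normally, consistent with the 1-second delay device holding essentially every in-flight request at queue depth 128 until its abort arrives; the few-count gaps look like completion races, not lost commands.
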
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.218 08:22:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.127 08:22:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:53.127 00:07:53.127 real 0m13.213s 00:07:53.127 user 0m13.888s 00:07:53.127 sys 0m6.474s 00:07:53.127 08:22:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.127 08:22:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.127 ************************************ 00:07:53.127 END TEST nvmf_abort 00:07:53.127 ************************************ 00:07:53.127 08:22:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.127 08:22:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:53.127 08:22:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.127 08:22:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.127 ************************************ 00:07:53.127 START TEST nvmf_ns_hotplug_stress 00:07:53.127 ************************************ 00:07:53.127 08:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.388 * Looking for test storage... 
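
Note on the teardown above: nvmftestfini undoes setup in reverse. It unloads nvme-tcp (taking nvme_fabrics and nvme_keyring with it), kills the target by PID, restores the firewall by round-tripping the ruleset with every SPDK_NVMF-tagged rule filtered out, removes the namespace, and flushes the initiator address; run_test then prints the timing banner (about 13.2 s wall for the whole abort test). The firewall trick and the namespace removal it implies, in isolation; the ip netns del form is an assumption, since the trace redirects _remove_spdk_ns output away:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    ip netns del cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns here
    ip -4 addr flush cvl_0_1              # last step visible in the trace above
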
00:07:53.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.388 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:53.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.388 --rc genhtml_branch_coverage=1 00:07:53.388 --rc genhtml_function_coverage=1 00:07:53.388 --rc genhtml_legend=1 00:07:53.389 --rc geninfo_all_blocks=1 00:07:53.389 --rc geninfo_unexecuted_blocks=1 00:07:53.389 00:07:53.389 ' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:53.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.389 --rc genhtml_branch_coverage=1 00:07:53.389 --rc genhtml_function_coverage=1 00:07:53.389 --rc genhtml_legend=1 00:07:53.389 --rc geninfo_all_blocks=1 00:07:53.389 --rc geninfo_unexecuted_blocks=1 00:07:53.389 00:07:53.389 ' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:53.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.389 --rc genhtml_branch_coverage=1 00:07:53.389 --rc genhtml_function_coverage=1 00:07:53.389 --rc genhtml_legend=1 00:07:53.389 --rc geninfo_all_blocks=1 00:07:53.389 --rc geninfo_unexecuted_blocks=1 00:07:53.389 00:07:53.389 ' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:53.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.389 --rc genhtml_branch_coverage=1 00:07:53.389 --rc genhtml_function_coverage=1 00:07:53.389 --rc genhtml_legend=1 00:07:53.389 --rc geninfo_all_blocks=1 00:07:53.389 --rc geninfo_unexecuted_blocks=1 00:07:53.389 00:07:53.389 ' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.389 08:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.616 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:01.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.617 08:22:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:01.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:01.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:01.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.617 08:22:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:08:01.617 00:08:01.617 --- 10.0.0.2 ping statistics --- 00:08:01.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.617 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:08:01.617 00:08:01.617 --- 10.0.0.1 ping statistics --- 00:08:01.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.617 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3543610 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3543610 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3543610 ']' 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.617 08:22:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.617 08:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:01.618 [2024-10-01 08:22:52.476594] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:08:01.618 [2024-10-01 08:22:52.476663] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.618 [2024-10-01 08:22:52.567019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.618 [2024-10-01 08:22:52.659853] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.618 [2024-10-01 08:22:52.659913] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.618 [2024-10-01 08:22:52.659921] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.618 [2024-10-01 08:22:52.659930] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.618 [2024-10-01 08:22:52.659936] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
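[Annotation] The trace above is nvmftestinit building the two-port test topology and then launching the target: the first E810 port (cvl_0_0) is moved into a fresh network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened with iptables, a ping in each direction verifies the link, and finally nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE). A minimal standalone sketch of the same namespace setup, with device names and addresses copied from the log (run as root; the SPDK helper also tags its iptables rule with a comment, omitted here):

  #!/usr/bin/env bash
  # Rebuild the two-port target/initiator topology traced above.
  set -e
  TARGET_NS=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"          # target port lives in the namespace

  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up

  # let NVMe/TCP traffic reach the initiator-facing port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                              # root ns -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # target ns -> initiator

The namespace split is what lets a single machine act as both NVMe-oF target and initiator over real NIC hardware rather than the kernel loopback.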
00:08:01.618 [2024-10-01 08:22:52.661250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.618 [2024-10-01 08:22:52.661416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.618 [2024-10-01 08:22:52.661418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.618 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.618 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:01.618 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:01.618 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:01.618 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.618 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.618 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:01.618 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:01.879 [2024-10-01 08:22:53.476863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.879 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:02.140 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.140 [2024-10-01 08:22:53.850547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.140 08:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.400 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:02.400 Malloc0 00:08:02.660 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:02.660 Delay0 00:08:02.660 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.920 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:03.179 NULL1 00:08:03.179 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:03.179 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3544299 00:08:03.179 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:03.179 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:03.179 08:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.438 08:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.697 08:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:03.697 08:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:03.698 true 00:08:03.957 08:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:03.957 08:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.957 08:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.216 08:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:04.216 08:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:04.476 true 00:08:04.476 08:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:04.476 08:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.476 08:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.736 08:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:04.736 08:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:04.995 true 00:08:04.995 08:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:04.995 08:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.255 08:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.255 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:05.255 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:05.516 true 00:08:05.516 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:05.516 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.776 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.038 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:06.038 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:06.038 true 00:08:06.038 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:06.038 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.299 08:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.560 08:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:06.560 08:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:06.560 true 00:08:06.560 08:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:06.560 08:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.820 08:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.080 08:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:07.080 08:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:07.080 true 00:08:07.080 08:22:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:07.080 08:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.340 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.600 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:07.600 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:07.600 true 00:08:07.600 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:07.600 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.860 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.121 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:08.121 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:08.121 true 00:08:08.381 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:08.381 08:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.381 08:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.642 08:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:08.642 08:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:08.902 true 00:08:08.902 08:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:08.902 08:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.902 08:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.162 08:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:09.163 08:23:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:09.423 true 00:08:09.423 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:09.423 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.423 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.683 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:09.683 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:09.944 true 00:08:09.944 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:09.944 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.944 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.205 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:10.205 08:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:10.465 true 00:08:10.465 08:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:10.465 08:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.726 08:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.726 08:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:10.726 08:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:10.988 true 00:08:10.988 08:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:10.988 08:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.248 08:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.248 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:11.249 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:11.508 true 00:08:11.509 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:11.509 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.768 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.768 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:11.768 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:12.028 true 00:08:12.028 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:12.029 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.289 08:23:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.549 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:12.549 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:12.549 true 00:08:12.549 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:12.549 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.808 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.068 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:13.068 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:13.068 true 00:08:13.068 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:13.068 08:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.327 08:23:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.586 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:13.586 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:13.586 true 00:08:13.586 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:13.586 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.846 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.106 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:14.107 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:14.107 true 00:08:14.107 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:14.107 08:23:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.368 08:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.628 08:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:14.628 08:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:14.628 true 00:08:14.888 08:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:14.888 08:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.888 08:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.149 08:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:15.149 08:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:15.409 true 00:08:15.409 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:15.410 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.410 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.670 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:15.670 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:15.930 true 00:08:15.930 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:15.930 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.930 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.192 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:16.192 08:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:16.453 true 00:08:16.453 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:16.453 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.714 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.714 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:16.714 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:16.975 true 00:08:16.975 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:16.975 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.235 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.235 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:17.235 08:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:17.495 true 00:08:17.495 08:23:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:17.495 08:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.755 08:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.755 08:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:17.755 08:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:18.016 true 00:08:18.016 08:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:18.016 08:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.276 08:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.276 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:18.276 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:18.537 true 00:08:18.537 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:18.537 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.797 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.057 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:19.057 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:19.057 true 00:08:19.057 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:19.057 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.316 08:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.576 08:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:19.576 08:23:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:19.576 true 00:08:19.576 08:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:19.576 08:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.836 08:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.096 08:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:20.096 08:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:20.096 true 00:08:20.096 08:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:20.096 08:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.356 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.616 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:20.616 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:20.616 true 00:08:20.877 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:20.877 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.877 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.138 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:21.138 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:21.138 true 00:08:21.398 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:21.398 08:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.398 08:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.658 08:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:21.658 08:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:21.918 true 00:08:21.918 08:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:21.918 08:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.918 08:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.178 08:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:22.178 08:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:22.438 true 00:08:22.438 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:22.438 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.438 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.697 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:22.697 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:22.958 true 00:08:22.958 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:22.958 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.218 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.218 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:23.218 08:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:23.478 true 00:08:23.478 08:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:23.478 08:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.739 08:23:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.739 08:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:23.739 08:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:23.999 true 00:08:23.999 08:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:23.999 08:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.259 08:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.259 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:24.259 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:24.521 true 00:08:24.521 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:24.521 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.781 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.042 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:25.042 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:25.042 true 00:08:25.042 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:25.042 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.304 08:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.564 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:25.564 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:25.564 true 00:08:25.564 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:25.564 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.825 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.086 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:26.086 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:26.086 true 00:08:26.086 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:26.086 08:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.346 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.607 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:26.607 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:26.607 true 00:08:26.868 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:26.868 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.868 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.128 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:27.128 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:27.389 true 00:08:27.389 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:27.389 08:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.389 08:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.649 08:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:27.649 08:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:27.910 true 00:08:27.910 08:23:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:27.910 08:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.911 08:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.171 08:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:28.171 08:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:28.432 true 00:08:28.432 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:28.432 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.432 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.692 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:28.692 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:28.952 true 00:08:28.952 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:28.952 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.213 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.213 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:29.213 08:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:29.483 true 00:08:29.483 08:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:29.483 08:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.744 08:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.744 08:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:29.744 08:23:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:30.007 true 00:08:30.007 08:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:30.007 08:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.267 08:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.267 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:30.267 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:30.528 true 00:08:30.528 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:30.528 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.789 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.789 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:30.789 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:31.050 true 00:08:31.050 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:31.050 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.310 08:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.570 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:31.570 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:31.570 true 00:08:31.570 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299 00:08:31.570 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.830 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:08:32.090 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:08:32.090 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:08:32.090 true
00:08:32.090 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299
00:08:32.090 08:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:32.351 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:32.611 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:08:32.611 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:08:32.611 true
00:08:32.611 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299
00:08:32.611 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:32.871 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:33.132 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:08:33.132 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:08:33.132 true
00:08:33.393 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299
00:08:33.393 08:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.393 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:33.654 Initializing NVMe Controllers
00:08:33.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:33.654 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:08:33.654 Controller IO queue size 128, less than required.
00:08:33.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:33.654 WARNING: Some requested NVMe devices were skipped
00:08:33.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:33.654 Initialization complete. Launching workers.
00:08:33.654 ========================================================
00:08:33.654 Latency(us)
00:08:33.654 Device Information : IOPS MiB/s Average min max
00:08:33.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29942.94 14.62 4275.02 1412.20 43773.77
00:08:33.654 ========================================================
00:08:33.654 Total : 29942.94 14.62 4275.02 1412.20 43773.77
00:08:33.654
00:08:33.654 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:08:33.654 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:08:33.915 true
00:08:33.915 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3544299
00:08:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3544299) - No such process
00:08:33.915 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3544299
00:08:33.915 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.915 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:34.175 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:34.175 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:34.175 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:34.175 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:34.175 08:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:34.436 null0
00:08:34.436 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:34.436 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:34.436 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:34.436 null1
00:08:34.436 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:34.436 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:34.436 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:34.696 null2
00:08:34.696 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:34.696 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
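The @44-@50 xtrace above is the single-namespace hot-plug loop of ns_hotplug_stress.sh: while the I/O generator (PID 3544299) is still alive, NSID 1 is detached and re-attached and the NULL1 bdev is grown by one size unit per pass (null_size 1053, 1054, 1055, 1056), until kill -0 reports "No such process". A minimal sketch of that loop, reconstructed from the trace; the variable names and the exact increment expression are assumptions, not the verbatim script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1024
    # perf_pid is assumed: the PID of the backgrounded I/O workload, traced here as 3544299.
    while kill -0 "$perf_pid"; do                   # sh@44: keep churning while I/O runs
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # sh@45: hot-remove NSID 1 under load
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0  # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                # sh@49: assumed +1 per pass, per the trace
        "$rpc" bdev_null_resize NULL1 "$null_size"  # sh@50: resize NULL1 while attached
    done

The perf summary printed in the middle of the block (29942.94 IOPS against NSID 2) is the workload that ran concurrently with this churn.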
08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:34.958 null3 00:08:34.958 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.958 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.958 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:34.958 null4 00:08:34.958 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.218 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.218 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:35.218 null5 00:08:35.218 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.218 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.218 08:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:35.478 null6 00:08:35.478 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.478 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.478 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:35.739 null7 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
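From @58 onward the test switches to its multi-worker phase: it creates eight null bdevs (null0 through null7, with the size arguments 100 and 4096 as traced) and backgrounds one add_remove worker per namespace; the interleaved @14-@17 lines above are those eight workers tracing concurrently, and the "wait 3550856 3550858 ..." call just below joins them. A sketch of the two pieces as reconstructed from the @14-@18 and @58-@66 trace lines (loop style approximated, not the verbatim script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    add_remove() {                      # sh@14-@18: ten add/remove cycles on one NSID
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # sh@17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # sh@18
        done
    }
    nthreads=8                          # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096  # sh@60: backing bdev for worker i
        add_remove $((i + 1)) "null$i" &           # sh@63: NSID i+1 against null$i
        pids+=($!)                                 # sh@64: collect worker PIDs
    done
    wait "${pids[@]}"                              # sh@66: join all eight workers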
00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3550856 3550858 3550859 3550861 3550863 3550865 3550868 3550870 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.739 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.740 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.740 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.001 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.002 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.002 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.002 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.002 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.262 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.262 08:23:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.262 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.262 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.262 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.262 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.262 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.262 08:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.262 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.263 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.263 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.263 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.263 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.263 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.534 08:23:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.534 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.535 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.795 08:23:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.795 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.055 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.056 08:23:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.056 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.318 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.318 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.318 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.318 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.318 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.318 08:23:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.318 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.318 08:23:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.318 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.318 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.318 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.580 08:23:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.580 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.843 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.105 08:23:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.105 08:23:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.105 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.365 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.365 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.365 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.365 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.365 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.365 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.365 08:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.365 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.365 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.365 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.365 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.365 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.365 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.365 08:23:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.625 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.885 08:23:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.885 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.886 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.886 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.886 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.886 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.146 08:23:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.146 08:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:39.406 08:23:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.406 rmmod nvme_tcp 00:08:39.406 rmmod nvme_fabrics 00:08:39.406 rmmod nvme_keyring 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3543610 ']' 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3543610 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3543610 ']' 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3543610 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3543610 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3543610' 00:08:39.406 killing process with pid 3543610 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3543610 00:08:39.406 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3543610 00:08:39.666 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:39.666 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:08:39.667 08:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.667 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.667 00:08:41.667 real 0m48.474s 00:08:41.667 user 3m18.620s 00:08:41.667 sys 0m16.668s 00:08:41.667 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.667 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.667 ************************************ 00:08:41.667 END TEST nvmf_ns_hotplug_stress 00:08:41.667 ************************************ 00:08:41.667 08:23:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:41.667 08:23:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:41.667 08:23:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.667 08:23:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.928 ************************************ 00:08:41.928 START TEST nvmf_delete_subsystem 00:08:41.928 ************************************ 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:41.928 * Looking for test storage... 00:08:41.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.928 08:23:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.928 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.929 --rc genhtml_branch_coverage=1 00:08:41.929 --rc genhtml_function_coverage=1 00:08:41.929 --rc genhtml_legend=1 00:08:41.929 --rc geninfo_all_blocks=1 00:08:41.929 --rc geninfo_unexecuted_blocks=1 00:08:41.929 00:08:41.929 ' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.929 --rc genhtml_branch_coverage=1 00:08:41.929 --rc genhtml_function_coverage=1 00:08:41.929 --rc genhtml_legend=1 00:08:41.929 --rc geninfo_all_blocks=1 00:08:41.929 --rc geninfo_unexecuted_blocks=1 00:08:41.929 00:08:41.929 ' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.929 --rc genhtml_branch_coverage=1 00:08:41.929 --rc genhtml_function_coverage=1 00:08:41.929 --rc genhtml_legend=1 00:08:41.929 --rc geninfo_all_blocks=1 00:08:41.929 --rc geninfo_unexecuted_blocks=1 00:08:41.929 00:08:41.929 ' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:41.929 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.929 --rc genhtml_branch_coverage=1 00:08:41.929 --rc genhtml_function_coverage=1 00:08:41.929 --rc genhtml_legend=1 00:08:41.929 --rc geninfo_all_blocks=1 00:08:41.929 --rc geninfo_unexecuted_blocks=1 00:08:41.929 00:08:41.929 ' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.929 08:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:50.075 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:50.075 08:23:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:50.075 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.075 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:50.076 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:50.076 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
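The discovery pass above is the pci_devs walk from nvmf/common.sh: it matches the two Intel E810 functions (vendor 0x8086, device 0x159b, bound to the ice driver) and resolves each one to its kernel net device through the per-device sysfs net/ directory, producing the "Found net devices under ..." lines. A minimal standalone sketch of that mapping, assuming only the vendor/device IDs printed in the trace (the loop and variable names are illustrative, not the common.sh code itself, which caches the PCI bus scan in an array first):

#!/usr/bin/env bash
# Sketch: find net devices backing E810 (0x8086:0x159b) PCI functions,
# mirroring the "Found net devices under ..." lines in the trace above.
shopt -s nullglob
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done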
00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:50.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:08:50.076 00:08:50.076 --- 10.0.0.2 ping statistics --- 00:08:50.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.076 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:08:50.076 00:08:50.076 --- 10.0.0.1 ping statistics --- 00:08:50.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.076 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3556068 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3556068 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3556068 ']' 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:08:50.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.076 08:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.076 [2024-10-01 08:23:41.012413] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:08:50.076 [2024-10-01 08:23:41.012464] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.076 [2024-10-01 08:23:41.080733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.076 [2024-10-01 08:23:41.143279] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.076 [2024-10-01 08:23:41.143320] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.076 [2024-10-01 08:23:41.143328] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.076 [2024-10-01 08:23:41.143335] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.076 [2024-10-01 08:23:41.143342] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.076 [2024-10-01 08:23:41.144201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.076 [2024-10-01 08:23:41.144202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.076 [2024-10-01 08:23:41.266810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.076 [2024-10-01 08:23:41.291065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:50.076 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.077 NULL1 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.077 Delay0 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3556090 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:50.077 08:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:50.077 [2024-10-01 08:23:41.387814] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
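Before the target starts, nvmf_tcp_init builds the loopback-over-the-wire topology visible above: one E810 port (cvl_0_1) stays in the default namespace as the initiator, the other (cvl_0_0) is moved into a private namespace for the target, a tagged iptables rule opens port 4420, and a ping in each direction proves the path. Condensed from the exact commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator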
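With connectivity verified, the test body is a short RPC sequence: create the TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (roughly 1 s of artificial latency) so that I/O is guaranteed to be in flight when the subsystem is deleted underneath the initiator. The same sequence restated against a running nvmf_tgt via scripts/rpc.py; the RPC names and arguments come straight from the trace, but using rpc.py directly instead of the harness's rpc_cmd wrapper is an assumption for the sketch:

RPC="./scripts/rpc.py"   # assumed SPDK checkout layout; the harness uses rpc_cmd
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                 # 1000 MB backing, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000      # latencies in microseconds
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Then drive I/O at it and delete the subsystem mid-flight:
#   spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
#       -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
#   $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1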
00:08:51.987 08:23:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.987 08:23:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.987 08:23:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 starting I/O failed: -6 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 [2024-10-01 08:23:43.475743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140e390 is same with the state(6) to be set 00:08:51.987 Read completed with 
error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Write completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.987 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Write completed with 
error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 starting I/O failed: -6 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 [2024-10-01 08:23:43.479387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5dec00d450 is same with the state(6) to be set 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, 
sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Write completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:51.988 Read completed with error (sct=0, sc=8) 00:08:52.928 [2024-10-01 08:23:44.446018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fa70 is same with the state(6) to be set 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 [2024-10-01 08:23:44.479041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140e570 is same with the state(6) to be set 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 
Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 [2024-10-01 08:23:44.479462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140e930 is same with the state(6) to be set 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Write completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 [2024-10-01 08:23:44.482255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5dec00d780 is same with the state(6) to be set 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.928 Read completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Write completed with error (sct=0, sc=8) 00:08:52.929 Write completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Write completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Write completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Read completed with error (sct=0, sc=8) 00:08:52.929 Write completed with error (sct=0, sc=8) 00:08:52.929 [2024-10-01 08:23:44.482348] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5dec00cfe0 is same with the state(6) to be set 00:08:52.929 Initializing NVMe Controllers 00:08:52.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:52.929 Controller IO queue size 128, less than required. 00:08:52.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:52.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:52.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:52.929 Initialization complete. Launching workers. 00:08:52.929 ======================================================== 00:08:52.929 Latency(us) 00:08:52.929 Device Information : IOPS MiB/s Average min max 00:08:52.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.73 0.09 883723.03 250.72 1007754.28 00:08:52.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.29 0.08 920824.91 296.95 2002502.18 00:08:52.929 ======================================================== 00:08:52.929 Total : 335.02 0.16 901474.60 250.72 2002502.18 00:08:52.929 00:08:52.929 [2024-10-01 08:23:44.482853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140fa70 (9): Bad file descriptor 00:08:52.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:52.929 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.929 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:52.929 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3556090 00:08:52.929 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:53.189 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:53.189 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3556090 00:08:53.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3556090) - No such process 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3556090 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3556090 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3556090 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:53.190 08:23:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.190 08:23:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.190 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.190 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.190 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.190 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.451 [2024-10-01 08:23:45.015395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3556773 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556773 00:08:53.451 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.451 [2024-10-01 08:23:45.092186] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
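Both passes end the same way: the deletion makes the queued I/O fail back to spdk_nvme_perf, and the harness simply polls the backgrounded perf process until it exits on its own, with a bounded retry count (the first pass additionally proves the pid is gone with a NOT wait). The (( delay++ > N )) / kill -0 / sleep 0.5 pattern in the trace reduces to a sketch like the following; variable names are illustrative and the exact loop ordering in test/nvmf/target/delete_subsystem.sh may differ:

perf_pid=$!                                  # backgrounded spdk_nvme_perf
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do    # 'No such process' once perf exits
    (( delay++ > 20 )) && break              # bounded wait, 0.5 s per step
    sleep 0.5
done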
00:08:54.023 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.023 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556773 00:08:54.023 08:23:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.283 08:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.283 08:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556773 00:08:54.283 08:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.855 08:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.855 08:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556773 00:08:54.855 08:23:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.427 08:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.427 08:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556773 00:08:55.427 08:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.998 08:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:55.998 08:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556773 00:08:55.998 08:23:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.258 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.258 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556773 00:08:56.258 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.518 Initializing NVMe Controllers 00:08:56.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:56.518 Controller IO queue size 128, less than required. 00:08:56.518 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:56.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:56.518 Initialization complete. Launching workers. 
00:08:56.518 ======================================================== 00:08:56.518 Latency(us) 00:08:56.518 Device Information : IOPS MiB/s Average min max 00:08:56.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001943.21 1000265.96 1005596.98 00:08:56.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003251.62 1000275.63 1009638.58 00:08:56.518 ======================================================== 00:08:56.518 Total : 256.00 0.12 1002597.41 1000265.96 1009638.58 00:08:56.518 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3556773 00:08:56.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3556773) - No such process 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3556773 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.780 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.780 rmmod nvme_tcp 00:08:56.780 rmmod nvme_fabrics 00:08:57.040 rmmod nvme_keyring 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3556068 ']' 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3556068 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3556068 ']' 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3556068 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3556068 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3556068' 00:08:57.040 killing process with pid 3556068 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3556068 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3556068 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.040 08:23:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.589 08:23:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.589 00:08:59.589 real 0m17.424s 00:08:59.589 user 0m29.142s 00:08:59.589 sys 0m6.552s 00:08:59.589 08:23:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.589 08:23:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.589 ************************************ 00:08:59.589 END TEST nvmf_delete_subsystem 00:08:59.589 ************************************ 00:08:59.589 08:23:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:59.589 08:23:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:59.589 08:23:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.589 08:23:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.589 ************************************ 00:08:59.589 START TEST nvmf_host_management 00:08:59.589 ************************************ 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:59.589 * Looking for test storage... 
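Teardown (nvmftestfini) is the mirror image of the setup: unload the nvme-tcp module stack, kill the target, and strip exactly the firewall rules the test added. The tagging scheme is what makes that last step safe: every rule was inserted with an SPDK_NVMF comment, so the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline in the trace removes the test's rules and nothing else. A sketch of that lifecycle, with both iptables commands taken verbatim from the trace above:

# setup: the rule carries its own removal tag in its comment
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: drop every SPDK_NVMF-tagged rule, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore
modprobe -v -r nvme-tcp        # rmmod of nvme_tcp/nvme_fabrics/nvme_keyring follows
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1       # release the initiator-side address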
00:08:59.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:59.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.589 --rc genhtml_branch_coverage=1 00:08:59.589 --rc genhtml_function_coverage=1 00:08:59.589 --rc genhtml_legend=1 00:08:59.589 --rc geninfo_all_blocks=1 00:08:59.589 --rc geninfo_unexecuted_blocks=1 00:08:59.589 00:08:59.589 ' 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:59.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.589 --rc genhtml_branch_coverage=1 00:08:59.589 --rc genhtml_function_coverage=1 00:08:59.589 --rc genhtml_legend=1 00:08:59.589 --rc geninfo_all_blocks=1 00:08:59.589 --rc geninfo_unexecuted_blocks=1 00:08:59.589 00:08:59.589 ' 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:59.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.589 --rc genhtml_branch_coverage=1 00:08:59.589 --rc genhtml_function_coverage=1 00:08:59.589 --rc genhtml_legend=1 00:08:59.589 --rc geninfo_all_blocks=1 00:08:59.589 --rc geninfo_unexecuted_blocks=1 00:08:59.589 00:08:59.589 ' 00:08:59.589 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:59.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.589 --rc genhtml_branch_coverage=1 00:08:59.589 --rc genhtml_function_coverage=1 00:08:59.589 --rc genhtml_legend=1 00:08:59.589 --rc geninfo_all_blocks=1 00:08:59.590 --rc geninfo_unexecuted_blocks=1 00:08:59.590 00:08:59.590 ' 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:59.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.590 08:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:07.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.734 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 
-- # [[ tcp == rdma ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:07.735 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:07.735 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:07.735 Found net devices under 0000:4b:00.1: 
cvl_0_1 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:07.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:07.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms
00:09:07.735
00:09:07.735 --- 10.0.0.2 ping statistics ---
00:09:07.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:07.735 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms
00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:07.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:07.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms
00:09:07.735
00:09:07.735 --- 10.0.0.1 ping statistics ---
00:09:07.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:07.735 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms
00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3561789 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3561789 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3561789 ']' 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.735 08:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.735 [2024-10-01 08:23:58.523476] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:09:07.735 [2024-10-01 08:23:58.523527] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.735 [2024-10-01 08:23:58.607626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.735 [2024-10-01 08:23:58.682118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.735 [2024-10-01 08:23:58.682172] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.735 [2024-10-01 08:23:58.682180] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.736 [2024-10-01 08:23:58.682187] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.736 [2024-10-01 08:23:58.682193] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
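The namespace plumbing traced above is the crux of a NET_TYPE=phy run: nvmf/common.sh moves one port of the E810 pair into a private network namespace so target and initiator traffic crosses a real link instead of loopback. A condensed sketch of what those xtrace lines executed, with paths shortened and with the interface names and addresses taken from this run (cvl_0_0/cvl_0_1 and the 10.0.0.x subnet are specific to this host):

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                             # sanity: root ns to target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The -m 0x1E core mask covers cores 1 through 4, which is why exactly four "Reactor started" notices from the target follow below.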
00:09:07.736 [2024-10-01 08:23:58.684033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.736 [2024-10-01 08:23:58.684222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.736 [2024-10-01 08:23:58.684383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:07.736 [2024-10-01 08:23:58.684384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.736 [2024-10-01 08:23:59.372200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.736 Malloc0 00:09:07.736 [2024-10-01 08:23:59.435516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3562073 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3562073 /var/tmp/bdevperf.sock 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3562073 ']' 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:07.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:07.736 { 00:09:07.736 "params": { 00:09:07.736 "name": "Nvme$subsystem", 00:09:07.736 "trtype": "$TEST_TRANSPORT", 00:09:07.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.736 "adrfam": "ipv4", 00:09:07.736 "trsvcid": "$NVMF_PORT", 00:09:07.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.736 "hdgst": ${hdgst:-false}, 00:09:07.736 "ddgst": ${ddgst:-false} 00:09:07.736 }, 00:09:07.736 "method": "bdev_nvme_attach_controller" 00:09:07.736 } 00:09:07.736 EOF 00:09:07.736 )") 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:07.736 08:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:07.736 "params": { 00:09:07.736 "name": "Nvme0", 00:09:07.736 "trtype": "tcp", 00:09:07.736 "traddr": "10.0.0.2", 00:09:07.736 "adrfam": "ipv4", 00:09:07.736 "trsvcid": "4420", 00:09:07.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:07.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:07.736 "hdgst": false, 00:09:07.736 "ddgst": false 00:09:07.736 }, 00:09:07.736 "method": "bdev_nvme_attach_controller" 00:09:07.736 }' 00:09:07.736 [2024-10-01 08:23:59.548500] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
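Note how the initiator side gets its bdev stack: bdevperf issues no attach RPCs at runtime; gen_nvmf_target_json renders the heredoc above into a JSON config that is handed over through process substitution, which is where the --json /dev/fd/63 in the command line comes from. A sketch of the equivalent invocation, followed by the attach entry the heredoc resolved to in this run (the params are verbatim from the trace; the enclosing "subsystems"/"config" scaffolding that gen_nvmf_target_json wraps around it is assumed here, since it is not echoed in the log):

    # host_management.sh@72, reconstructed with a relative path
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10 \
        --json <(gen_nvmf_target_json 0)    # the shell exposes this as /dev/fd/63

    # entry produced for subsystem 0:
    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }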
00:09:07.736 [2024-10-01 08:23:59.548559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562073 ] 00:09:07.996 [2024-10-01 08:23:59.609965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.996 [2024-10-01 08:23:59.674750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.996 Running I/O for 10 seconds... 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.568 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.830 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.830 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=953 00:09:08.830 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 953 -ge 100 ']' 00:09:08.830 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:08.830 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:08.830 08:24:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:08.830 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:08.830 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.830 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.830 [2024-10-01 08:24:00.426598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c53b0 is same with the state(6) to be set 00:09:08.830 [2024-10-01 08:24:00.426680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c53b0 is same with the state(6) to be set 00:09:08.830 [2024-10-01 08:24:00.426688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c53b0 is same with the state(6) to be set 00:09:08.831 [2024-10-01 08:24:00.426696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c53b0 is same with the state(6) to be set 00:09:08.831 [2024-10-01 08:24:00.426703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c53b0 is same with the state(6) to be set 00:09:08.831 [2024-10-01 08:24:00.426710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c53b0 is same with the state(6) to be set 00:09:08.831 [2024-10-01 08:24:00.426716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c53b0 is same with the state(6) to be set 00:09:08.831 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.831 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:08.831 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.831 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:08.831 [2024-10-01 08:24:00.434117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:08.831 [2024-10-01 08:24:00.434153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.434166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:08.831 [2024-10-01 08:24:00.434174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.434183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:08.831 [2024-10-01 08:24:00.434191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.434199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:08.831 [2024-10-01 08:24:00.434213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.434221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2075280 is same with the state(6) to be set 00:09:08.831 [2024-10-01 08:24:00.436977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:09:08.831 [2024-10-01 08:24:00.437167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.831 [2024-10-01 08:24:00.437321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.831 [2024-10-01 08:24:00.437329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 
[2024-10-01 08:24:00.437346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 08:24:00.437507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.832 [2024-10-01 08:24:00.437517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:08.832 [2024-10-01 
08:24:00.437524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:08.832 [2024-10-01 08:24:00.437534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:08.832 [2024-10-01 08:24:00.437543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the identical WRITE / ABORTED - SQ DELETION notice pair repeats for cid:31 through cid:62, lba:3968 through lba:7936 in steps of 128 ...]
00:09:08.833 [2024-10-01 08:24:00.438118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:09:08.833 [2024-10-01 08:24:00.438125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:08.833 [2024-10-01 08:24:00.438178] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x228dff0 was disconnected and freed. reset controller.
00:09:08.833 [2024-10-01 08:24:00.439367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:09:08.833 task offset: 0 on job bdev=Nvme0n1 fails
00:09:08.833
00:09:08.833 Latency(us)
00:09:08.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:08.833 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:08.833 Job: Nvme0n1 ended in about 0.62 seconds with error
00:09:08.833 Verification LBA range: start 0x0 length 0x400
00:09:08.833 Nvme0n1 : 0.62 1658.47 103.65 103.65 0.00 35458.15 1529.17 33860.27
00:09:08.833 ===================================================================================================================
00:09:08.833 Total : 1658.47 103.65 103.65 0.00 35458.15 1529.17 33860.27
00:09:08.833
00:09:08.833 [2024-10-01 08:24:00.441346] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:08.833 [2024-10-01 08:24:00.441367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2075280 (9): Bad file descriptor
00:09:08.833 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.833 08:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:09:08.833 [2024-10-01 08:24:00.452640] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
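The MiB/s figures in the table above follow directly from the IOPS column and the 64 KiB (65536-byte) I/O size of this bdevperf job; a standalone arithmetic check, not output from this run:

$ awk 'BEGIN { printf "%.2f MiB/s\n", 1658.47 * 65536 / 1048576 }'
103.65 MiB/s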
00:09:09.774 08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3562073
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3562073) - No such process
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=()
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq .
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=,
08:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
[2024-10-01 08:24:01.503378] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
[2024-10-01 08:24:01.503438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562605 ]
[2024-10-01 08:24:01.562303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-01 08:24:01.629508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
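The bdevperf invocation above takes --json /dev/fd/62: the JSON that gen_nvmf_target_json prints is handed to the process through bash process substitution rather than a temporary file. A minimal standalone sketch of the same pattern (gen_json here is an illustrative stand-in, not the test's helper):

$ gen_json() { printf '{"subsystems": []}\n'; }
$ jq . <(gen_json)   # <(...) expands to a /dev/fd/NN path the child can read
{
  "subsystems": []
}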
00:09:11.419 1598.00 IOPS, 99.88 MiB/s
00:09:11.419 Latency(us)
00:09:11.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:11.419 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:09:11.419 Verification LBA range: start 0x0 length 0x400
00:09:11.420 Nvme0n1 : 1.04 1603.84 100.24 0.00 0.00 39215.35 6307.84 32549.55
00:09:11.420 ===================================================================================================================
00:09:11.420 Total : 1603.84 100.24 0.00 0.00 39215.35 6307.84 32549.55
00:09:11.420
00:09:11.420 08:24:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
08:24:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
08:24:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
08:24:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3561789 ']'
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3561789
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3561789 ']'
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3561789
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3561789
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:11.420 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3561789' 00:09:11.420 killing process with pid 3561789 00:09:11.420 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3561789 00:09:11.420 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3561789 00:09:11.420 [2024-10-01 08:24:03.235906] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.680 08:24:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.594 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:13.594 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:13.594 00:09:13.594 real 0m14.337s 00:09:13.594 user 0m22.872s 00:09:13.594 sys 0m6.504s 00:09:13.594 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.594 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:13.594 ************************************ 00:09:13.594 END TEST nvmf_host_management 00:09:13.594 ************************************ 00:09:13.594 08:24:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:13.594 08:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.594 08:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.594 08:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.594 ************************************ 00:09:13.594 START TEST nvmf_lvol 00:09:13.594 ************************************ 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:09:13.855 * Looking for test storage... 00:09:13.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:13.855 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:13.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.856 --rc genhtml_branch_coverage=1 00:09:13.856 --rc genhtml_function_coverage=1 00:09:13.856 --rc genhtml_legend=1 00:09:13.856 --rc geninfo_all_blocks=1 00:09:13.856 --rc geninfo_unexecuted_blocks=1 00:09:13.856 00:09:13.856 ' 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:13.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.856 --rc genhtml_branch_coverage=1 00:09:13.856 --rc genhtml_function_coverage=1 00:09:13.856 --rc genhtml_legend=1 00:09:13.856 --rc geninfo_all_blocks=1 00:09:13.856 --rc geninfo_unexecuted_blocks=1 00:09:13.856 00:09:13.856 ' 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:13.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.856 --rc genhtml_branch_coverage=1 00:09:13.856 --rc genhtml_function_coverage=1 00:09:13.856 --rc genhtml_legend=1 00:09:13.856 --rc geninfo_all_blocks=1 00:09:13.856 --rc geninfo_unexecuted_blocks=1 00:09:13.856 00:09:13.856 ' 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:13.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.856 --rc genhtml_branch_coverage=1 00:09:13.856 --rc genhtml_function_coverage=1 00:09:13.856 --rc genhtml_legend=1 00:09:13.856 --rc geninfo_all_blocks=1 00:09:13.856 --rc geninfo_unexecuted_blocks=1 00:09:13.856 00:09:13.856 ' 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
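The scripts/common.sh cmp_versions walk a few lines up (scripts/common.sh@333 through @368) decides whether the installed lcov predates 2 by splitting both version strings on dots and comparing them field by numeric field until one side wins. A rough standalone equivalent built on coreutils version sort — not the helper the script itself defines:

$ ver_lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
$ ver_lt 1.15 2 && echo '1.15 < 2'
1.15 < 2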
00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.856 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.857 08:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:21.997 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:21.998 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:21.998 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:21.998 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:21.998 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.998 
08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:09:21.998 00:09:21.998 --- 10.0.0.2 ping statistics --- 00:09:21.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.998 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:09:21.998 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:09:21.998 00:09:21.998 --- 10.0.0.1 ping statistics --- 00:09:21.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.998 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3567497 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3567497 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3567497 ']' 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.999 08:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:21.999 [2024-10-01 08:24:13.004947] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:09:21.999 [2024-10-01 08:24:13.005026] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.999 [2024-10-01 08:24:13.076471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.999 [2024-10-01 08:24:13.149684] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.999 [2024-10-01 08:24:13.149724] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.999 [2024-10-01 08:24:13.149733] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.999 [2024-10-01 08:24:13.149740] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.999 [2024-10-01 08:24:13.149746] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.999 [2024-10-01 08:24:13.150785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.999 [2024-10-01 08:24:13.150920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.999 [2024-10-01 08:24:13.150923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.999 08:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.999 08:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:21.999 08:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:21.999 08:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:21.999 08:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:22.260 08:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.260 08:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:22.260 [2024-10-01 08:24:14.008566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.260 08:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.521 08:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:22.521 08:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.783 08:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:22.783 08:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:23.044 08:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:23.044 08:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bca66547-05b6-4e2f-ae7e-1d70f161d9d3 00:09:23.044 08:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bca66547-05b6-4e2f-ae7e-1d70f161d9d3 lvol 20 00:09:23.304 08:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=387265bc-1c34-4ec0-ad54-f01b43d46d66 00:09:23.304 08:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:23.564 08:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 387265bc-1c34-4ec0-ad54-f01b43d46d66 00:09:23.564 08:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:23.824 [2024-10-01 08:24:15.533112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.824 08:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.083 08:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3568148 00:09:24.083 08:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:24.084 08:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:25.025 08:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 387265bc-1c34-4ec0-ad54-f01b43d46d66 MY_SNAPSHOT 00:09:25.285 08:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e4730b92-0659-4cbd-b33f-162f612e1201 00:09:25.285 08:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 387265bc-1c34-4ec0-ad54-f01b43d46d66 30 00:09:25.545 08:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e4730b92-0659-4cbd-b33f-162f612e1201 MY_CLONE 00:09:25.545 08:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bf09443e-7961-48d3-97b4-403e3ac332be 00:09:25.545 08:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bf09443e-7961-48d3-97b4-403e3ac332be 00:09:25.805 08:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3568148 00:09:35.872 Initializing NVMe Controllers 00:09:35.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:35.872 Controller IO queue size 128, less than required. 00:09:35.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
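The -c 0x18 core mask handed to spdk_nvme_perf a few lines up is a hex bitmap of CPU cores: bits 3 and 4 are set, which is why the namespace is associated with lcore 3 and lcore 4 in the output that follows. Decoding the mask in plain bash (a standalone check, not part of the test):

$ for i in {0..7}; do (( (0x18 >> i) & 1 )) && echo "core $i"; done
core 3
core 4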
00:09:35.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:35.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:35.872 Initialization complete. Launching workers.
00:09:35.872 ========================================================
00:09:35.872 Latency(us)
00:09:35.872 Device Information : IOPS MiB/s Average min max
00:09:35.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12062.10 47.12 10615.85 1585.84 43068.74
00:09:35.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17450.80 68.17 7335.63 3072.10 49003.59
00:09:35.872 ========================================================
00:09:35.872 Total : 29512.90 115.28 8676.27 1585.84 49003.59
00:09:35.872
00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 387265bc-1c34-4ec0-ad54-f01b43d46d66
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bca66547-05b6-4e2f-ae7e-1d70f161d9d3
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3567497 ']'
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3567497
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3567497 ']'
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3567497
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3567497
08:24:26
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3567497' 00:09:35.872 killing process with pid 3567497 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3567497 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3567497 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.872 08:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.259 08:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.259 00:09:37.259 real 0m23.503s 00:09:37.259 user 1m3.786s 00:09:37.259 sys 0m8.470s 00:09:37.259 08:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.259 08:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:37.259 ************************************ 00:09:37.259 END TEST nvmf_lvol 00:09:37.259 ************************************ 00:09:37.259 08:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:37.259 08:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.259 08:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.259 08:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.259 ************************************ 00:09:37.259 START TEST nvmf_lvs_grow 00:09:37.259 ************************************ 00:09:37.259 08:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:37.520 * Looking for test storage... 
00:09:37.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:37.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.521 --rc genhtml_branch_coverage=1 00:09:37.521 --rc genhtml_function_coverage=1 00:09:37.521 --rc genhtml_legend=1 00:09:37.521 --rc geninfo_all_blocks=1 00:09:37.521 --rc geninfo_unexecuted_blocks=1 00:09:37.521 00:09:37.521 ' 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:37.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.521 --rc genhtml_branch_coverage=1 00:09:37.521 --rc genhtml_function_coverage=1 00:09:37.521 --rc genhtml_legend=1 00:09:37.521 --rc geninfo_all_blocks=1 00:09:37.521 --rc geninfo_unexecuted_blocks=1 00:09:37.521 00:09:37.521 ' 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:37.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.521 --rc genhtml_branch_coverage=1 00:09:37.521 --rc genhtml_function_coverage=1 00:09:37.521 --rc genhtml_legend=1 00:09:37.521 --rc geninfo_all_blocks=1 00:09:37.521 --rc geninfo_unexecuted_blocks=1 00:09:37.521 00:09:37.521 ' 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:37.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.521 --rc genhtml_branch_coverage=1 00:09:37.521 --rc genhtml_function_coverage=1 00:09:37.521 --rc genhtml_legend=1 00:09:37.521 --rc geninfo_all_blocks=1 00:09:37.521 --rc geninfo_unexecuted_blocks=1 00:09:37.521 00:09:37.521 ' 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:37.521 08:24:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.521 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.522 08:24:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:45.668 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:45.668 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:45.668 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:45.668 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.668 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.668 
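The discovery pass traced above resolves each supported NIC's PCI address to its kernel net device by globbing sysfs. A condensed sketch of the same loop from nvmf/common.sh, with the two E810 addresses found in this run (up-state and driver checks omitted):

    # map E810 PCI functions (0x8086:0x159b) to net interfaces, as the trace does
    net_devs=()
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done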
08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:09:45.669 00:09:45.669 --- 10.0.0.2 ping statistics --- 00:09:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.669 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:09:45.669 00:09:45.669 --- 10.0.0.1 ping statistics --- 00:09:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.669 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3574517 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3574517 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3574517 ']' 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.669 08:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.669 [2024-10-01 08:24:36.584768] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:09:45.669 [2024-10-01 08:24:36.584819] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.669 [2024-10-01 08:24:36.650041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.669 [2024-10-01 08:24:36.713466] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.669 [2024-10-01 08:24:36.713500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.669 [2024-10-01 08:24:36.713508] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.669 [2024-10-01 08:24:36.713515] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.669 [2024-10-01 08:24:36.713521] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.669 [2024-10-01 08:24:36.714065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.669 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.669 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:45.669 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:45.669 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.669 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.669 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.669 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:45.931 [2024-10-01 08:24:37.566118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.931 ************************************ 00:09:45.931 START TEST lvs_grow_clean 00:09:45.931 ************************************ 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:45.931 08:24:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.931 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.192 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:46.192 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:46.192 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d95a60a6-ab2e-47cf-939a-926484b78052 00:09:46.192 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d95a60a6-ab2e-47cf-939a-926484b78052 00:09:46.192 08:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:46.450 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:46.450 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:46.450 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d95a60a6-ab2e-47cf-939a-926484b78052 lvol 150 00:09:46.710 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8434d0c1-bc56-4ab2-912b-5e1250d44e52 00:09:46.710 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.710 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:46.710 [2024-10-01 08:24:38.484732] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:46.710 [2024-10-01 08:24:38.484783] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:46.710 true 00:09:46.710 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
d95a60a6-ab2e-47cf-939a-926484b78052 00:09:46.710 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:46.971 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:46.971 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:47.233 08:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8434d0c1-bc56-4ab2-912b-5e1250d44e52 00:09:47.233 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:47.495 [2024-10-01 08:24:39.154872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.495 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3575225 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3575225 /var/tmp/bdevperf.sock 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3575225 ']' 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:47.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.754 08:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:47.754 [2024-10-01 08:24:39.382758] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:09:47.754 [2024-10-01 08:24:39.382811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3575225 ] 00:09:47.754 [2024-10-01 08:24:39.460064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.754 [2024-10-01 08:24:39.524178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.422 08:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.422 08:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:48.422 08:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:48.994 Nvme0n1 00:09:48.994 08:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:48.994 [ 00:09:48.994 { 00:09:48.994 "name": "Nvme0n1", 00:09:48.994 "aliases": [ 00:09:48.994 "8434d0c1-bc56-4ab2-912b-5e1250d44e52" 00:09:48.994 ], 00:09:48.994 "product_name": "NVMe disk", 00:09:48.994 "block_size": 4096, 00:09:48.994 "num_blocks": 38912, 00:09:48.994 "uuid": "8434d0c1-bc56-4ab2-912b-5e1250d44e52", 00:09:48.994 "numa_id": 0, 00:09:48.994 "assigned_rate_limits": { 00:09:48.994 "rw_ios_per_sec": 0, 00:09:48.994 "rw_mbytes_per_sec": 0, 00:09:48.994 "r_mbytes_per_sec": 0, 00:09:48.994 "w_mbytes_per_sec": 0 00:09:48.994 }, 00:09:48.994 "claimed": false, 00:09:48.994 "zoned": false, 00:09:48.994 "supported_io_types": { 00:09:48.994 "read": true, 00:09:48.994 "write": true, 00:09:48.994 "unmap": true, 00:09:48.994 "flush": true, 00:09:48.994 "reset": true, 00:09:48.994 "nvme_admin": true, 00:09:48.994 "nvme_io": true, 00:09:48.994 "nvme_io_md": false, 00:09:48.994 "write_zeroes": true, 00:09:48.994 "zcopy": false, 00:09:48.994 "get_zone_info": false, 00:09:48.994 "zone_management": false, 00:09:48.994 "zone_append": false, 00:09:48.994 "compare": true, 00:09:48.994 "compare_and_write": true, 00:09:48.994 "abort": true, 00:09:48.994 "seek_hole": false, 00:09:48.994 "seek_data": false, 00:09:48.994 "copy": true, 00:09:48.994 "nvme_iov_md": false 00:09:48.994 }, 00:09:48.994 "memory_domains": [ 00:09:48.994 { 00:09:48.994 "dma_device_id": "system", 00:09:48.994 "dma_device_type": 1 00:09:48.994 } 00:09:48.994 ], 00:09:48.994 "driver_specific": { 00:09:48.994 "nvme": [ 00:09:48.994 { 00:09:48.994 "trid": { 00:09:48.994 "trtype": "TCP", 00:09:48.994 "adrfam": "IPv4", 00:09:48.994 "traddr": "10.0.0.2", 00:09:48.994 "trsvcid": "4420", 00:09:48.994 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:48.994 }, 00:09:48.994 "ctrlr_data": { 00:09:48.994 "cntlid": 1, 00:09:48.994 "vendor_id": "0x8086", 00:09:48.994 "model_number": "SPDK bdev Controller", 00:09:48.994 "serial_number": "SPDK0", 00:09:48.994 "firmware_revision": "25.01", 00:09:48.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:48.994 "oacs": { 00:09:48.994 "security": 0, 00:09:48.994 "format": 0, 00:09:48.994 "firmware": 0, 00:09:48.994 "ns_manage": 0 00:09:48.994 }, 00:09:48.994 "multi_ctrlr": true, 00:09:48.994 
"ana_reporting": false 00:09:48.994 }, 00:09:48.994 "vs": { 00:09:48.994 "nvme_version": "1.3" 00:09:48.994 }, 00:09:48.994 "ns_data": { 00:09:48.994 "id": 1, 00:09:48.994 "can_share": true 00:09:48.994 } 00:09:48.994 } 00:09:48.994 ], 00:09:48.994 "mp_policy": "active_passive" 00:09:48.994 } 00:09:48.994 } 00:09:48.994 ] 00:09:48.994 08:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3575525 00:09:48.994 08:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:48.994 08:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:49.254 Running I/O for 10 seconds... 00:09:50.195 Latency(us) 00:09:50.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.195 Nvme0n1 : 1.00 17910.00 69.96 0.00 0.00 0.00 0.00 0.00 00:09:50.195 =================================================================================================================== 00:09:50.195 Total : 17910.00 69.96 0.00 0.00 0.00 0.00 0.00 00:09:50.195 00:09:51.136 08:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d95a60a6-ab2e-47cf-939a-926484b78052 00:09:51.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.136 Nvme0n1 : 2.00 17939.00 70.07 0.00 0.00 0.00 0.00 0.00 00:09:51.136 =================================================================================================================== 00:09:51.136 Total : 17939.00 70.07 0.00 0.00 0.00 0.00 0.00 00:09:51.136 00:09:51.136 true 00:09:51.396 08:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d95a60a6-ab2e-47cf-939a-926484b78052 00:09:51.396 08:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:51.396 08:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:51.396 08:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:51.396 08:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3575525 00:09:52.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.338 Nvme0n1 : 3.00 17988.33 70.27 0.00 0.00 0.00 0.00 0.00 00:09:52.338 =================================================================================================================== 00:09:52.338 Total : 17988.33 70.27 0.00 0.00 0.00 0.00 0.00 00:09:52.338 00:09:53.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.278 Nvme0n1 : 4.00 18035.75 70.45 0.00 0.00 0.00 0.00 0.00 00:09:53.278 =================================================================================================================== 00:09:53.278 Total : 18035.75 70.45 0.00 0.00 0.00 0.00 0.00 00:09:53.278 00:09:54.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.218 
Nvme0n1 : 5.00 18053.40 70.52 0.00 0.00 0.00 0.00 0.00 00:09:54.218 =================================================================================================================== 00:09:54.218 Total : 18053.40 70.52 0.00 0.00 0.00 0.00 0.00 00:09:54.218 00:09:55.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.159 Nvme0n1 : 6.00 18071.83 70.59 0.00 0.00 0.00 0.00 0.00 00:09:55.159 =================================================================================================================== 00:09:55.159 Total : 18071.83 70.59 0.00 0.00 0.00 0.00 0.00 00:09:55.159 00:09:56.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.097 Nvme0n1 : 7.00 18092.00 70.67 0.00 0.00 0.00 0.00 0.00 00:09:56.097 =================================================================================================================== 00:09:56.097 Total : 18092.00 70.67 0.00 0.00 0.00 0.00 0.00 00:09:56.097 00:09:57.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.480 Nvme0n1 : 8.00 18107.62 70.73 0.00 0.00 0.00 0.00 0.00 00:09:57.480 =================================================================================================================== 00:09:57.480 Total : 18107.62 70.73 0.00 0.00 0.00 0.00 0.00 00:09:57.480 00:09:58.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.050 Nvme0n1 : 9.00 18121.67 70.79 0.00 0.00 0.00 0.00 0.00 00:09:58.050 =================================================================================================================== 00:09:58.050 Total : 18121.67 70.79 0.00 0.00 0.00 0.00 0.00 00:09:58.050 00:09:59.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.430 Nvme0n1 : 10.00 18128.30 70.81 0.00 0.00 0.00 0.00 0.00 00:09:59.430 =================================================================================================================== 00:09:59.430 Total : 18128.30 70.81 0.00 0.00 0.00 0.00 0.00 00:09:59.430 00:09:59.430 00:09:59.430 Latency(us) 00:09:59.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.430 Nvme0n1 : 10.01 18129.03 70.82 0.00 0.00 7057.36 2075.31 12724.91 00:09:59.430 =================================================================================================================== 00:09:59.430 Total : 18129.03 70.82 0.00 0.00 7057.36 2075.31 12724.91 00:09:59.430 { 00:09:59.430 "results": [ 00:09:59.430 { 00:09:59.430 "job": "Nvme0n1", 00:09:59.430 "core_mask": "0x2", 00:09:59.430 "workload": "randwrite", 00:09:59.430 "status": "finished", 00:09:59.430 "queue_depth": 128, 00:09:59.430 "io_size": 4096, 00:09:59.430 "runtime": 10.006657, 00:09:59.430 "iops": 18129.03150372797, 00:09:59.430 "mibps": 70.81652931143738, 00:09:59.430 "io_failed": 0, 00:09:59.430 "io_timeout": 0, 00:09:59.430 "avg_latency_us": 7057.361420421032, 00:09:59.430 "min_latency_us": 2075.306666666667, 00:09:59.430 "max_latency_us": 12724.906666666666 00:09:59.430 } 00:09:59.431 ], 00:09:59.431 "core_count": 1 00:09:59.431 } 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3575225 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3575225 ']' 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@954 -- # kill -0 3575225 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3575225 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3575225' 00:09:59.431 killing process with pid 3575225 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3575225 00:09:59.431 Received shutdown signal, test time was about 10.000000 seconds 00:09:59.431 00:09:59.431 Latency(us) 00:09:59.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.431 =================================================================================================================== 00:09:59.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:59.431 08:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3575225 00:09:59.431 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:59.431 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:59.690 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d95a60a6-ab2e-47cf-939a-926484b78052 00:09:59.690 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:59.950 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:59.950 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:59.950 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:59.950 [2024-10-01 08:24:51.762337] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d95a60a6-ab2e-47cf-939a-926484b78052 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d95a60a6-ab2e-47cf-939a-926484b78052 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d95a60a6-ab2e-47cf-939a-926484b78052 00:10:00.210 request: 00:10:00.210 { 00:10:00.210 "uuid": "d95a60a6-ab2e-47cf-939a-926484b78052", 00:10:00.210 "method": "bdev_lvol_get_lvstores", 00:10:00.210 "req_id": 1 00:10:00.210 } 00:10:00.210 Got JSON-RPC error response 00:10:00.210 response: 00:10:00.210 { 00:10:00.210 "code": -19, 00:10:00.210 "message": "No such device" 00:10:00.210 } 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:00.210 08:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.471 aio_bdev 00:10:00.471 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8434d0c1-bc56-4ab2-912b-5e1250d44e52 00:10:00.471 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=8434d0c1-bc56-4ab2-912b-5e1250d44e52 00:10:00.471 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.471 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:00.471 08:24:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.471 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.471 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:00.731 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8434d0c1-bc56-4ab2-912b-5e1250d44e52 -t 2000 00:10:00.731 [ 00:10:00.731 { 00:10:00.731 "name": "8434d0c1-bc56-4ab2-912b-5e1250d44e52", 00:10:00.731 "aliases": [ 00:10:00.731 "lvs/lvol" 00:10:00.731 ], 00:10:00.732 "product_name": "Logical Volume", 00:10:00.732 "block_size": 4096, 00:10:00.732 "num_blocks": 38912, 00:10:00.732 "uuid": "8434d0c1-bc56-4ab2-912b-5e1250d44e52", 00:10:00.732 "assigned_rate_limits": { 00:10:00.732 "rw_ios_per_sec": 0, 00:10:00.732 "rw_mbytes_per_sec": 0, 00:10:00.732 "r_mbytes_per_sec": 0, 00:10:00.732 "w_mbytes_per_sec": 0 00:10:00.732 }, 00:10:00.732 "claimed": false, 00:10:00.732 "zoned": false, 00:10:00.732 "supported_io_types": { 00:10:00.732 "read": true, 00:10:00.732 "write": true, 00:10:00.732 "unmap": true, 00:10:00.732 "flush": false, 00:10:00.732 "reset": true, 00:10:00.732 "nvme_admin": false, 00:10:00.732 "nvme_io": false, 00:10:00.732 "nvme_io_md": false, 00:10:00.732 "write_zeroes": true, 00:10:00.732 "zcopy": false, 00:10:00.732 "get_zone_info": false, 00:10:00.732 "zone_management": false, 00:10:00.732 "zone_append": false, 00:10:00.732 "compare": false, 00:10:00.732 "compare_and_write": false, 00:10:00.732 "abort": false, 00:10:00.732 "seek_hole": true, 00:10:00.732 "seek_data": true, 00:10:00.732 "copy": false, 00:10:00.732 "nvme_iov_md": false 00:10:00.732 }, 00:10:00.732 "driver_specific": { 00:10:00.732 "lvol": { 00:10:00.732 "lvol_store_uuid": "d95a60a6-ab2e-47cf-939a-926484b78052", 00:10:00.732 "base_bdev": "aio_bdev", 00:10:00.732 "thin_provision": false, 00:10:00.732 "num_allocated_clusters": 38, 00:10:00.732 "snapshot": false, 00:10:00.732 "clone": false, 00:10:00.732 "esnap_clone": false 00:10:00.732 } 00:10:00.732 } 00:10:00.732 } 00:10:00.732 ] 00:10:00.732 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:00.732 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d95a60a6-ab2e-47cf-939a-926484b78052 00:10:00.732 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:00.992 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:00.992 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d95a60a6-ab2e-47cf-939a-926484b78052 00:10:00.992 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:00.992 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:00.992 08:24:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8434d0c1-bc56-4ab2-912b-5e1250d44e52 00:10:01.252 08:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d95a60a6-ab2e-47cf-939a-926484b78052 00:10:01.512 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:01.512 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:01.774 00:10:01.774 real 0m15.727s 00:10:01.774 user 0m15.468s 00:10:01.774 sys 0m1.372s 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:01.774 ************************************ 00:10:01.774 END TEST lvs_grow_clean 00:10:01.774 ************************************ 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:01.774 ************************************ 00:10:01.774 START TEST lvs_grow_dirty 00:10:01.774 ************************************ 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:01.774 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:02.035 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:02.035 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:02.035 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b8bad3db-7715-4293-aab7-c51057be662d 00:10:02.035 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:02.035 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:02.296 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:02.296 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:02.296 08:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b8bad3db-7715-4293-aab7-c51057be662d lvol 150 00:10:02.557 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f3c141a9-dcfa-4736-9e9d-3a520483a07f 00:10:02.557 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:02.557 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:02.557 [2024-10-01 08:24:54.261525] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:02.557 [2024-10-01 08:24:54.261575] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:02.557 true 00:10:02.557 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:02.557 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:02.817 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:02.817 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:02.817 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f3c141a9-dcfa-4736-9e9d-3a520483a07f 00:10:03.078 08:24:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:03.338 [2024-10-01 08:24:54.935585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.338 08:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3578328 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3578328 /var/tmp/bdevperf.sock 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3578328 ']' 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:03.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.338 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.338 [2024-10-01 08:24:55.153553] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
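The bdevperf run being launched here can be reproduced by hand with the same flags; a minimal sketch, assuming an NVMe-oF target already listening on 10.0.0.2:4420 and paths relative to an SPDK checkout (the sleep is a crude stand-in for the script's waitforlisten helper):

    # Start bdevperf idle (-z) on its own RPC socket, same flags as the test.
    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    sleep 1   # stand-in for waitforlisten on /var/tmp/bdevperf.sock
    # Attach the exported namespace as bdev Nvme0n1, then kick off the workload.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
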
00:10:03.338 [2024-10-01 08:24:55.153605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578328 ] 00:10:03.598 [2024-10-01 08:24:55.229346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.598 [2024-10-01 08:24:55.293400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.168 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.169 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:04.169 08:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:04.739 Nvme0n1 00:10:04.739 08:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:04.739 [ 00:10:04.739 { 00:10:04.739 "name": "Nvme0n1", 00:10:04.739 "aliases": [ 00:10:04.739 "f3c141a9-dcfa-4736-9e9d-3a520483a07f" 00:10:04.739 ], 00:10:04.739 "product_name": "NVMe disk", 00:10:04.739 "block_size": 4096, 00:10:04.739 "num_blocks": 38912, 00:10:04.739 "uuid": "f3c141a9-dcfa-4736-9e9d-3a520483a07f", 00:10:04.739 "numa_id": 0, 00:10:04.739 "assigned_rate_limits": { 00:10:04.739 "rw_ios_per_sec": 0, 00:10:04.739 "rw_mbytes_per_sec": 0, 00:10:04.739 "r_mbytes_per_sec": 0, 00:10:04.739 "w_mbytes_per_sec": 0 00:10:04.739 }, 00:10:04.739 "claimed": false, 00:10:04.739 "zoned": false, 00:10:04.739 "supported_io_types": { 00:10:04.739 "read": true, 00:10:04.739 "write": true, 00:10:04.739 "unmap": true, 00:10:04.739 "flush": true, 00:10:04.739 "reset": true, 00:10:04.739 "nvme_admin": true, 00:10:04.739 "nvme_io": true, 00:10:04.739 "nvme_io_md": false, 00:10:04.739 "write_zeroes": true, 00:10:04.739 "zcopy": false, 00:10:04.739 "get_zone_info": false, 00:10:04.739 "zone_management": false, 00:10:04.739 "zone_append": false, 00:10:04.739 "compare": true, 00:10:04.739 "compare_and_write": true, 00:10:04.739 "abort": true, 00:10:04.739 "seek_hole": false, 00:10:04.739 "seek_data": false, 00:10:04.739 "copy": true, 00:10:04.739 "nvme_iov_md": false 00:10:04.739 }, 00:10:04.739 "memory_domains": [ 00:10:04.739 { 00:10:04.739 "dma_device_id": "system", 00:10:04.739 "dma_device_type": 1 00:10:04.739 } 00:10:04.739 ], 00:10:04.739 "driver_specific": { 00:10:04.739 "nvme": [ 00:10:04.739 { 00:10:04.739 "trid": { 00:10:04.739 "trtype": "TCP", 00:10:04.739 "adrfam": "IPv4", 00:10:04.739 "traddr": "10.0.0.2", 00:10:04.739 "trsvcid": "4420", 00:10:04.739 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:04.739 }, 00:10:04.739 "ctrlr_data": { 00:10:04.739 "cntlid": 1, 00:10:04.739 "vendor_id": "0x8086", 00:10:04.739 "model_number": "SPDK bdev Controller", 00:10:04.739 "serial_number": "SPDK0", 00:10:04.739 "firmware_revision": "25.01", 00:10:04.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:04.739 "oacs": { 00:10:04.739 "security": 0, 00:10:04.739 "format": 0, 00:10:04.739 "firmware": 0, 00:10:04.739 "ns_manage": 0 00:10:04.739 }, 00:10:04.739 "multi_ctrlr": true, 00:10:04.739 
"ana_reporting": false 00:10:04.739 }, 00:10:04.739 "vs": { 00:10:04.739 "nvme_version": "1.3" 00:10:04.739 }, 00:10:04.739 "ns_data": { 00:10:04.739 "id": 1, 00:10:04.739 "can_share": true 00:10:04.739 } 00:10:04.739 } 00:10:04.739 ], 00:10:04.739 "mp_policy": "active_passive" 00:10:04.739 } 00:10:04.739 } 00:10:04.739 ] 00:10:04.739 08:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3578662 00:10:04.739 08:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:04.739 08:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:05.000 Running I/O for 10 seconds... 00:10:05.943 Latency(us) 00:10:05.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.943 Nvme0n1 : 1.00 17428.00 68.08 0.00 0.00 0.00 0.00 0.00 00:10:05.943 =================================================================================================================== 00:10:05.943 Total : 17428.00 68.08 0.00 0.00 0.00 0.00 0.00 00:10:05.943 00:10:06.884 08:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:06.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.884 Nvme0n1 : 2.00 17510.00 68.40 0.00 0.00 0.00 0.00 0.00 00:10:06.884 =================================================================================================================== 00:10:06.884 Total : 17510.00 68.40 0.00 0.00 0.00 0.00 0.00 00:10:06.884 00:10:07.145 true 00:10:07.145 08:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:07.145 08:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:07.145 08:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:07.145 08:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:07.145 08:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3578662 00:10:08.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.088 Nvme0n1 : 3.00 17542.67 68.53 0.00 0.00 0.00 0.00 0.00 00:10:08.088 =================================================================================================================== 00:10:08.088 Total : 17542.67 68.53 0.00 0.00 0.00 0.00 0.00 00:10:08.088 00:10:09.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.030 Nvme0n1 : 4.00 17569.00 68.63 0.00 0.00 0.00 0.00 0.00 00:10:09.030 =================================================================================================================== 00:10:09.030 Total : 17569.00 68.63 0.00 0.00 0.00 0.00 0.00 00:10:09.030 00:10:09.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.971 
Nvme0n1 : 5.00 17596.00 68.73 0.00 0.00 0.00 0.00 0.00 00:10:09.971 =================================================================================================================== 00:10:09.971 Total : 17596.00 68.73 0.00 0.00 0.00 0.00 0.00 00:10:09.971 00:10:10.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.913 Nvme0n1 : 6.00 17618.00 68.82 0.00 0.00 0.00 0.00 0.00 00:10:10.913 =================================================================================================================== 00:10:10.913 Total : 17618.00 68.82 0.00 0.00 0.00 0.00 0.00 00:10:10.913 00:10:11.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.857 Nvme0n1 : 7.00 17638.29 68.90 0.00 0.00 0.00 0.00 0.00 00:10:11.857 =================================================================================================================== 00:10:11.857 Total : 17638.29 68.90 0.00 0.00 0.00 0.00 0.00 00:10:11.857 00:10:13.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.241 Nvme0n1 : 8.00 17652.50 68.96 0.00 0.00 0.00 0.00 0.00 00:10:13.241 =================================================================================================================== 00:10:13.241 Total : 17652.50 68.96 0.00 0.00 0.00 0.00 0.00 00:10:13.241 00:10:14.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.179 Nvme0n1 : 9.00 17667.11 69.01 0.00 0.00 0.00 0.00 0.00 00:10:14.179 =================================================================================================================== 00:10:14.179 Total : 17667.11 69.01 0.00 0.00 0.00 0.00 0.00 00:10:14.179 00:10:15.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.119 Nvme0n1 : 10.00 17677.20 69.05 0.00 0.00 0.00 0.00 0.00 00:10:15.119 =================================================================================================================== 00:10:15.119 Total : 17677.20 69.05 0.00 0.00 0.00 0.00 0.00 00:10:15.119 00:10:15.119 00:10:15.119 Latency(us) 00:10:15.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.119 Nvme0n1 : 10.01 17677.42 69.05 0.00 0.00 7235.43 2580.48 9830.40 00:10:15.119 =================================================================================================================== 00:10:15.119 Total : 17677.42 69.05 0.00 0.00 7235.43 2580.48 9830.40 00:10:15.119 { 00:10:15.119 "results": [ 00:10:15.119 { 00:10:15.119 "job": "Nvme0n1", 00:10:15.119 "core_mask": "0x2", 00:10:15.119 "workload": "randwrite", 00:10:15.119 "status": "finished", 00:10:15.119 "queue_depth": 128, 00:10:15.119 "io_size": 4096, 00:10:15.119 "runtime": 10.007117, 00:10:15.119 "iops": 17677.418980911287, 00:10:15.119 "mibps": 69.05241789418471, 00:10:15.119 "io_failed": 0, 00:10:15.119 "io_timeout": 0, 00:10:15.119 "avg_latency_us": 7235.427360994912, 00:10:15.119 "min_latency_us": 2580.48, 00:10:15.119 "max_latency_us": 9830.4 00:10:15.119 } 00:10:15.119 ], 00:10:15.119 "core_count": 1 00:10:15.119 } 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3578328 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3578328 ']' 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # kill -0 3578328 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3578328 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3578328' 00:10:15.119 killing process with pid 3578328 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3578328 00:10:15.119 Received shutdown signal, test time was about 10.000000 seconds 00:10:15.119 00:10:15.119 Latency(us) 00:10:15.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.119 =================================================================================================================== 00:10:15.119 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3578328 00:10:15.119 08:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:15.379 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3574517 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3574517 00:10:15.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3574517 Killed "${NVMF_APP[@]}" "$@" 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3580758 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3580758 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3580758 ']' 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.640 08:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:15.901 [2024-10-01 08:25:07.475624] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:10:15.901 [2024-10-01 08:25:07.475686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.901 [2024-10-01 08:25:07.542323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.901 [2024-10-01 08:25:07.606476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.901 [2024-10-01 08:25:07.606511] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.901 [2024-10-01 08:25:07.606519] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.901 [2024-10-01 08:25:07.606526] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.901 [2024-10-01 08:25:07.606531] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
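What follows is the dirty half of the check: the first target was killed with SIGKILL while the lvstore was still open, so re-creating the aio bdev on the freshly started target forces a blobstore recovery, after which the cluster counts must already reflect the grown 400M backing file. Condensed to a sketch, with the lvstore UUID as printed earlier in this run and paths relative to the SPDK checkout:

    # Reload the backing file; the lvstore left open at kill time triggers
    # blobstore recovery during load.
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=b8bad3db-7715-4293-aab7-c51057be662d
    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))   # 99 4MiB clusters post-grow, 61 still free
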
00:10:15.901 [2024-10-01 08:25:07.607078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.472 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.472 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:16.472 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:16.472 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.472 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:16.472 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:16.733 [2024-10-01 08:25:08.449367] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:16.733 [2024-10-01 08:25:08.449469] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:16.733 [2024-10-01 08:25:08.449499] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f3c141a9-dcfa-4736-9e9d-3a520483a07f 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f3c141a9-dcfa-4736-9e9d-3a520483a07f 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.733 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:16.993 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f3c141a9-dcfa-4736-9e9d-3a520483a07f -t 2000 00:10:16.994 [ 00:10:16.994 { 00:10:16.994 "name": "f3c141a9-dcfa-4736-9e9d-3a520483a07f", 00:10:16.994 "aliases": [ 00:10:16.994 "lvs/lvol" 00:10:16.994 ], 00:10:16.994 "product_name": "Logical Volume", 00:10:16.994 "block_size": 4096, 00:10:16.994 "num_blocks": 38912, 00:10:16.994 "uuid": "f3c141a9-dcfa-4736-9e9d-3a520483a07f", 00:10:16.994 "assigned_rate_limits": { 00:10:16.994 "rw_ios_per_sec": 0, 00:10:16.994 "rw_mbytes_per_sec": 0, 00:10:16.994 "r_mbytes_per_sec": 0, 00:10:16.994 "w_mbytes_per_sec": 0 00:10:16.994 }, 00:10:16.994 "claimed": false, 00:10:16.994 "zoned": false, 
00:10:16.994 "supported_io_types": { 00:10:16.994 "read": true, 00:10:16.994 "write": true, 00:10:16.994 "unmap": true, 00:10:16.994 "flush": false, 00:10:16.994 "reset": true, 00:10:16.994 "nvme_admin": false, 00:10:16.994 "nvme_io": false, 00:10:16.994 "nvme_io_md": false, 00:10:16.994 "write_zeroes": true, 00:10:16.994 "zcopy": false, 00:10:16.994 "get_zone_info": false, 00:10:16.994 "zone_management": false, 00:10:16.994 "zone_append": false, 00:10:16.994 "compare": false, 00:10:16.994 "compare_and_write": false, 00:10:16.994 "abort": false, 00:10:16.994 "seek_hole": true, 00:10:16.994 "seek_data": true, 00:10:16.994 "copy": false, 00:10:16.994 "nvme_iov_md": false 00:10:16.994 }, 00:10:16.994 "driver_specific": { 00:10:16.994 "lvol": { 00:10:16.994 "lvol_store_uuid": "b8bad3db-7715-4293-aab7-c51057be662d", 00:10:16.994 "base_bdev": "aio_bdev", 00:10:16.994 "thin_provision": false, 00:10:16.994 "num_allocated_clusters": 38, 00:10:16.994 "snapshot": false, 00:10:16.994 "clone": false, 00:10:16.994 "esnap_clone": false 00:10:16.994 } 00:10:16.994 } 00:10:16.994 } 00:10:16.994 ] 00:10:17.255 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:17.255 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:17.255 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:17.255 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:17.255 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:17.255 08:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:17.516 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:17.516 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:17.516 [2024-10-01 08:25:09.313696] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:17.777 request: 00:10:17.777 { 00:10:17.777 "uuid": "b8bad3db-7715-4293-aab7-c51057be662d", 00:10:17.777 "method": "bdev_lvol_get_lvstores", 00:10:17.777 "req_id": 1 00:10:17.777 } 00:10:17.777 Got JSON-RPC error response 00:10:17.777 response: 00:10:17.777 { 00:10:17.777 "code": -19, 00:10:17.777 "message": "No such device" 00:10:17.777 } 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.777 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:18.038 aio_bdev 00:10:18.038 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f3c141a9-dcfa-4736-9e9d-3a520483a07f 00:10:18.038 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f3c141a9-dcfa-4736-9e9d-3a520483a07f 00:10:18.038 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.038 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:18.038 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.038 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.038 08:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:18.299 08:25:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f3c141a9-dcfa-4736-9e9d-3a520483a07f -t 2000 00:10:18.299 [ 00:10:18.299 { 00:10:18.299 "name": "f3c141a9-dcfa-4736-9e9d-3a520483a07f", 00:10:18.299 "aliases": [ 00:10:18.299 "lvs/lvol" 00:10:18.299 ], 00:10:18.299 "product_name": "Logical Volume", 00:10:18.299 "block_size": 4096, 00:10:18.299 "num_blocks": 38912, 00:10:18.299 "uuid": "f3c141a9-dcfa-4736-9e9d-3a520483a07f", 00:10:18.299 "assigned_rate_limits": { 00:10:18.299 "rw_ios_per_sec": 0, 00:10:18.299 "rw_mbytes_per_sec": 0, 00:10:18.299 "r_mbytes_per_sec": 0, 00:10:18.299 "w_mbytes_per_sec": 0 00:10:18.299 }, 00:10:18.299 "claimed": false, 00:10:18.299 "zoned": false, 00:10:18.299 "supported_io_types": { 00:10:18.299 "read": true, 00:10:18.299 "write": true, 00:10:18.299 "unmap": true, 00:10:18.299 "flush": false, 00:10:18.299 "reset": true, 00:10:18.299 "nvme_admin": false, 00:10:18.299 "nvme_io": false, 00:10:18.299 "nvme_io_md": false, 00:10:18.299 "write_zeroes": true, 00:10:18.299 "zcopy": false, 00:10:18.299 "get_zone_info": false, 00:10:18.299 "zone_management": false, 00:10:18.299 "zone_append": false, 00:10:18.299 "compare": false, 00:10:18.299 "compare_and_write": false, 00:10:18.299 "abort": false, 00:10:18.299 "seek_hole": true, 00:10:18.299 "seek_data": true, 00:10:18.299 "copy": false, 00:10:18.299 "nvme_iov_md": false 00:10:18.299 }, 00:10:18.299 "driver_specific": { 00:10:18.299 "lvol": { 00:10:18.299 "lvol_store_uuid": "b8bad3db-7715-4293-aab7-c51057be662d", 00:10:18.299 "base_bdev": "aio_bdev", 00:10:18.299 "thin_provision": false, 00:10:18.299 "num_allocated_clusters": 38, 00:10:18.299 "snapshot": false, 00:10:18.299 "clone": false, 00:10:18.299 "esnap_clone": false 00:10:18.299 } 00:10:18.299 } 00:10:18.299 } 00:10:18.299 ] 00:10:18.299 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:18.299 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:18.299 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:18.560 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:18.560 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8bad3db-7715-4293-aab7-c51057be662d 00:10:18.560 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:18.821 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:18.821 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f3c141a9-dcfa-4736-9e9d-3a520483a07f 00:10:18.821 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b8bad3db-7715-4293-aab7-c51057be662d 
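Teardown then proceeds as in the clean case; consolidated into a sketch (UUIDs from this run, path relative to the SPDK checkout): delete the lvol and lvstore, remove the aio bdev, and unlink the backing file.

    scripts/rpc.py bdev_lvol_delete f3c141a9-dcfa-4736-9e9d-3a520483a07f
    scripts/rpc.py bdev_lvol_delete_lvstore -u b8bad3db-7715-4293-aab7-c51057be662d
    scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f test/nvmf/target/aio_bdev   # backing file created by truncate at setup
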
00:10:19.082 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:19.343 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:19.343 00:10:19.343 real 0m17.526s 00:10:19.343 user 0m45.721s 00:10:19.343 sys 0m2.969s 00:10:19.343 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.343 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:19.343 ************************************ 00:10:19.343 END TEST lvs_grow_dirty 00:10:19.343 ************************************ 00:10:19.343 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:19.343 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:19.343 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:19.343 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:19.343 08:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:19.343 nvmf_trace.0 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.343 rmmod nvme_tcp 00:10:19.343 rmmod nvme_fabrics 00:10:19.343 rmmod nvme_keyring 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3580758 ']' 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3580758 00:10:19.343 
08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3580758 ']' 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3580758 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.343 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3580758 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3580758' 00:10:19.604 killing process with pid 3580758 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3580758 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3580758 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.604 08:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.152 00:10:22.152 real 0m44.414s 00:10:22.152 user 1m7.522s 00:10:22.152 sys 0m10.315s 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:22.152 ************************************ 00:10:22.152 END TEST nvmf_lvs_grow 00:10:22.152 ************************************ 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.152 ************************************ 00:10:22.152 START TEST nvmf_bdev_io_wait 00:10:22.152 ************************************ 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:22.152 * Looking for test storage... 00:10:22.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.152 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:22.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.153 --rc genhtml_branch_coverage=1 00:10:22.153 --rc genhtml_function_coverage=1 00:10:22.153 --rc genhtml_legend=1 00:10:22.153 --rc geninfo_all_blocks=1 00:10:22.153 --rc geninfo_unexecuted_blocks=1 00:10:22.153 00:10:22.153 ' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:22.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.153 --rc genhtml_branch_coverage=1 00:10:22.153 --rc genhtml_function_coverage=1 00:10:22.153 --rc genhtml_legend=1 00:10:22.153 --rc geninfo_all_blocks=1 00:10:22.153 --rc geninfo_unexecuted_blocks=1 00:10:22.153 00:10:22.153 ' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:22.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.153 --rc genhtml_branch_coverage=1 00:10:22.153 --rc genhtml_function_coverage=1 00:10:22.153 --rc genhtml_legend=1 00:10:22.153 --rc geninfo_all_blocks=1 00:10:22.153 --rc geninfo_unexecuted_blocks=1 00:10:22.153 00:10:22.153 ' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:22.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.153 --rc genhtml_branch_coverage=1 00:10:22.153 --rc genhtml_function_coverage=1 00:10:22.153 --rc genhtml_legend=1 00:10:22.153 --rc geninfo_all_blocks=1 00:10:22.153 --rc geninfo_unexecuted_blocks=1 00:10:22.153 00:10:22.153 ' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.153 08:25:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:22.153 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:22.154 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:22.154 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.154 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.154 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.154 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:22.154 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:22.154 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.154 08:25:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:28.808 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:28.808 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:28.808 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:28.809 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:28.809 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.809 08:25:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:28.809 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:29.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:29.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms
00:10:29.070
00:10:29.070 --- 10.0.0.2 ping statistics ---
00:10:29.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:29.070 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:29.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:29.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:10:29.070
00:10:29.070 --- 10.0.0.1 ping statistics ---
00:10:29.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:29.070 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:10:29.070 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3585775
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3585775
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3585775 ']'
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:29.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:29.331 08:25:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:29.331 [2024-10-01 08:25:21.001166] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
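The nvmf_tcp_init trace above boils down to a small recipe: move the target-side port into a private network namespace, address both ends on 10.0.0.0/24, open the NVMe/TCP listener port in iptables, and ping in each direction to prove the link. A condensed sketch of the same topology (interface and namespace names are taken from this log, not defaults):

    ip netns add cvl_0_0_ns_spdk                                        # target port gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because this is a NET_TYPE=phy run, nvmf_tgt is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt line above), so target and initiator traffic actually crosses the two physical E810 ports rather than loopback.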
00:10:29.332 [2024-10-01 08:25:21.001219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.332 [2024-10-01 08:25:21.069003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.332 [2024-10-01 08:25:21.133735] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.332 [2024-10-01 08:25:21.133775] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.332 [2024-10-01 08:25:21.133783] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.332 [2024-10-01 08:25:21.133790] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.332 [2024-10-01 08:25:21.133796] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.332 [2024-10-01 08:25:21.135558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.332 [2024-10-01 08:25:21.135672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.332 [2024-10-01 08:25:21.135831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.332 [2024-10-01 08:25:21.135832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:10:30.273 [2024-10-01 08:25:21.904089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.273 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.274 Malloc0 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.274 [2024-10-01 08:25:21.975093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3586123 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3586125 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:30.274 { 00:10:30.274 "params": { 
00:10:30.274 "name": "Nvme$subsystem", 00:10:30.274 "trtype": "$TEST_TRANSPORT", 00:10:30.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.274 "adrfam": "ipv4", 00:10:30.274 "trsvcid": "$NVMF_PORT", 00:10:30.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.274 "hdgst": ${hdgst:-false}, 00:10:30.274 "ddgst": ${ddgst:-false} 00:10:30.274 }, 00:10:30.274 "method": "bdev_nvme_attach_controller" 00:10:30.274 } 00:10:30.274 EOF 00:10:30.274 )") 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3586127 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3586130 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:30.274 { 00:10:30.274 "params": { 00:10:30.274 "name": "Nvme$subsystem", 00:10:30.274 "trtype": "$TEST_TRANSPORT", 00:10:30.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.274 "adrfam": "ipv4", 00:10:30.274 "trsvcid": "$NVMF_PORT", 00:10:30.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.274 "hdgst": ${hdgst:-false}, 00:10:30.274 "ddgst": ${ddgst:-false} 00:10:30.274 }, 00:10:30.274 "method": "bdev_nvme_attach_controller" 00:10:30.274 } 00:10:30.274 EOF 00:10:30.274 )") 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:30.274 { 00:10:30.274 "params": { 00:10:30.274 "name": "Nvme$subsystem", 00:10:30.274 "trtype": "$TEST_TRANSPORT", 00:10:30.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.274 "adrfam": "ipv4", 00:10:30.274 "trsvcid": "$NVMF_PORT", 00:10:30.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.274 "hdgst": ${hdgst:-false}, 
00:10:30.274 "ddgst": ${ddgst:-false} 00:10:30.274 }, 00:10:30.274 "method": "bdev_nvme_attach_controller" 00:10:30.274 } 00:10:30.274 EOF 00:10:30.274 )") 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:30.274 { 00:10:30.274 "params": { 00:10:30.274 "name": "Nvme$subsystem", 00:10:30.274 "trtype": "$TEST_TRANSPORT", 00:10:30.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.274 "adrfam": "ipv4", 00:10:30.274 "trsvcid": "$NVMF_PORT", 00:10:30.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.274 "hdgst": ${hdgst:-false}, 00:10:30.274 "ddgst": ${ddgst:-false} 00:10:30.274 }, 00:10:30.274 "method": "bdev_nvme_attach_controller" 00:10:30.274 } 00:10:30.274 EOF 00:10:30.274 )") 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3586123 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:30.274 "params": { 00:10:30.274 "name": "Nvme1", 00:10:30.274 "trtype": "tcp", 00:10:30.274 "traddr": "10.0.0.2", 00:10:30.274 "adrfam": "ipv4", 00:10:30.274 "trsvcid": "4420", 00:10:30.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.274 "hdgst": false, 00:10:30.274 "ddgst": false 00:10:30.274 }, 00:10:30.274 "method": "bdev_nvme_attach_controller" 00:10:30.274 }' 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:30.274 "params": { 00:10:30.274 "name": "Nvme1", 00:10:30.274 "trtype": "tcp", 00:10:30.274 "traddr": "10.0.0.2", 00:10:30.274 "adrfam": "ipv4", 00:10:30.274 "trsvcid": "4420", 00:10:30.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.274 "hdgst": false, 00:10:30.274 "ddgst": false 00:10:30.274 }, 00:10:30.274 "method": "bdev_nvme_attach_controller" 00:10:30.274 }' 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:30.274 "params": { 00:10:30.274 "name": "Nvme1", 00:10:30.274 "trtype": "tcp", 00:10:30.274 "traddr": "10.0.0.2", 00:10:30.274 "adrfam": "ipv4", 00:10:30.274 "trsvcid": "4420", 00:10:30.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.274 "hdgst": false, 00:10:30.274 "ddgst": false 00:10:30.274 }, 00:10:30.274 "method": "bdev_nvme_attach_controller" 00:10:30.274 }' 00:10:30.274 08:25:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:30.274 08:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:30.275 "params": { 00:10:30.275 "name": "Nvme1", 00:10:30.275 "trtype": "tcp", 00:10:30.275 "traddr": "10.0.0.2", 00:10:30.275 "adrfam": "ipv4", 00:10:30.275 "trsvcid": "4420", 00:10:30.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.275 "hdgst": false, 00:10:30.275 "ddgst": false 00:10:30.275 }, 00:10:30.275 "method": "bdev_nvme_attach_controller" 00:10:30.275 }' 00:10:30.275 [2024-10-01 08:25:22.030669] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:10:30.275 [2024-10-01 08:25:22.030715] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:30.275 [2024-10-01 08:25:22.032513] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:10:30.275 [2024-10-01 08:25:22.032562] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:30.275 [2024-10-01 08:25:22.032611] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:10:30.275 [2024-10-01 08:25:22.032659] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:30.275 [2024-10-01 08:25:22.035187] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
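For reference, the target stack those four workers attach to was assembled earlier in the trace via the rpc_cmd helper. Expressed directly against scripts/rpc.py (a sketch; rpc_cmd is a thin wrapper over the /var/tmp/spdk.sock socket shown above), the sequence is:

    ./scripts/rpc.py bdev_set_options -p 5 -c 1               # tiny bdev_io pool: forces ENOMEM so the
                                                              # io-wait retry path is actually exercised
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MB ramdisk, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -p 5 -c 1 pool sizing is the point of the test: with only a handful of bdev_io structures available, submissions fail and must be requeued through spdk_bdev_queue_io_wait, which is exactly what bdev_io_wait.sh validates.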
00:10:30.275 [2024-10-01 08:25:22.035236] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:10:30.535 [2024-10-01 08:25:22.164567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:30.535 [2024-10-01 08:25:22.205092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:30.535 [2024-10-01 08:25:22.216626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:10:30.535 [2024-10-01 08:25:22.254882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:10:30.535 [2024-10-01 08:25:22.266705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:30.535 [2024-10-01 08:25:22.318228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:30.535 [2024-10-01 08:25:22.318280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:10:30.795 [2024-10-01 08:25:22.368457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:10:30.795 Running I/O for 1 seconds...
00:10:30.795 Running I/O for 1 seconds...
00:10:31.056 Running I/O for 1 seconds...
00:10:31.056 Running I/O for 1 seconds...
00:10:32.000 13246.00 IOPS, 51.74 MiB/s
00:10:32.000 Latency(us)
00:10:32.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:32.000 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:32.000 Nvme1n1 : 1.01 13298.95 51.95 0.00 0.00 9592.58 5324.80 18131.63
00:10:32.000 ===================================================================================================================
00:10:32.000 Total : 13298.95 51.95 0.00 0.00 9592.58 5324.80 18131.63
00:10:32.000 187936.00 IOPS, 734.12 MiB/s
00:10:32.000 Latency(us)
00:10:32.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:32.000 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:32.000 Nvme1n1 : 1.00 187563.25 732.67 0.00 0.00 678.54 310.61 1979.73
00:10:32.000 ===================================================================================================================
00:10:32.000 Total : 187563.25 732.67 0.00 0.00 678.54 310.61 1979.73
00:10:32.000 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3586125
00:10:32.000 17049.00 IOPS, 66.60 MiB/s
00:10:32.000 Latency(us)
00:10:32.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:32.000 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:32.000 Nvme1n1 : 1.01 17094.33 66.77 0.00 0.00 7468.75 3235.84 10704.21
00:10:32.000 ===================================================================================================================
00:10:32.000 Total : 17094.33 66.77 0.00 0.00 7468.75 3235.84 10704.21
00:10:32.000 13059.00 IOPS, 51.01 MiB/s
00:10:32.000 Latency(us)
00:10:32.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:32.000 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:32.000 Nvme1n1 : 1.01 13134.39 51.31 0.00 0.00 9716.83 4232.53 21408.43
00:10:32.000 ===================================================================================================================
00:10:32.000 Total : 13134.39 51.31 0.00 0.00 9716.83 4232.53 21408.43
00:10:32.000 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@39 -- # wait 3586127 00:10:32.000 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3586130 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.261 rmmod nvme_tcp 00:10:32.261 rmmod nvme_fabrics 00:10:32.261 rmmod nvme_keyring 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3585775 ']' 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3585775 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3585775 ']' 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3585775 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.261 08:25:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3585775 00:10:32.261 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.261 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.261 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3585775' 00:10:32.261 killing process with pid 3585775 00:10:32.261 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3585775 00:10:32.261 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3585775 00:10:32.521 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # '[' '' == iso ']'
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:32.522 08:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:34.436 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:34.436
00:10:34.436 real 0m12.755s
00:10:34.436 user 0m19.617s
00:10:34.436 sys 0m6.966s
00:10:34.436 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:34.436 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:34.436 ************************************
00:10:34.436 END TEST nvmf_bdev_io_wait
00:10:34.436 ************************************
00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:34.698 ************************************
00:10:34.698 START TEST nvmf_queue_depth
00:10:34.698 ************************************
00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:10:34.698 * Looking for test storage...
00:10:34.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:34.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.698 --rc genhtml_branch_coverage=1 00:10:34.698 --rc genhtml_function_coverage=1 00:10:34.698 --rc genhtml_legend=1 00:10:34.698 --rc geninfo_all_blocks=1 00:10:34.698 --rc geninfo_unexecuted_blocks=1 00:10:34.698 00:10:34.698 ' 00:10:34.698 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:34.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.699 --rc genhtml_branch_coverage=1 00:10:34.699 --rc genhtml_function_coverage=1 00:10:34.699 --rc genhtml_legend=1 00:10:34.699 --rc geninfo_all_blocks=1 00:10:34.699 --rc geninfo_unexecuted_blocks=1 00:10:34.699 00:10:34.699 ' 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:34.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.699 --rc genhtml_branch_coverage=1 00:10:34.699 --rc genhtml_function_coverage=1 00:10:34.699 --rc genhtml_legend=1 00:10:34.699 --rc geninfo_all_blocks=1 00:10:34.699 --rc geninfo_unexecuted_blocks=1 00:10:34.699 00:10:34.699 ' 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:34.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.699 --rc genhtml_branch_coverage=1 00:10:34.699 --rc genhtml_function_coverage=1 00:10:34.699 --rc genhtml_legend=1 00:10:34.699 --rc geninfo_all_blocks=1 00:10:34.699 --rc geninfo_unexecuted_blocks=1 00:10:34.699 00:10:34.699 ' 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.699 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.960 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.960 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.960 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.961 08:25:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:43.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:43.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:43.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:43.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.204 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:10:43.205 00:10:43.205 --- 10.0.0.2 ping statistics --- 00:10:43.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.205 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:10:43.205 00:10:43.205 --- 10.0.0.1 ping statistics --- 00:10:43.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.205 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3590827 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3590827 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3590827 ']' 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.205 08:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 [2024-10-01 08:25:33.920072] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
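For reference, the nvmf_tcp_init sequence traced above reduces to the hand-runnable sketch below. The cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this run's E810 ports; substitute your own.

    ip netns add cvl_0_0_ns_spdk                       # target gets a private netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check

Splitting the two ports across namespaces is what forces traffic over the physical link rather than the kernel's local route, so one host can exercise a real NIC-to-NIC TCP path.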
00:10:43.205 [2024-10-01 08:25:33.920141] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.205 [2024-10-01 08:25:34.012488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.205 [2024-10-01 08:25:34.103764] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.205 [2024-10-01 08:25:34.103833] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.205 [2024-10-01 08:25:34.103842] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.205 [2024-10-01 08:25:34.103849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.205 [2024-10-01 08:25:34.103856] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.205 [2024-10-01 08:25:34.104644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 [2024-10-01 08:25:34.785724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 Malloc0 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.205 08:25:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 [2024-10-01 08:25:34.849573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3590923 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3590923 /var/tmp/bdevperf.sock 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3590923 ']' 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:43.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:43.205 08:25:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:43.205 [2024-10-01 08:25:34.906479] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
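Condensed, the target bring-up traced above and the bdevperf attach that follows amount to the sketch below. Paths are shown relative to the spdk checkout, and rpc_cmd is written out as scripts/rpc.py (the rpc_py tool this test suite configures); both are simplifications of the wrappers in the trace.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf in -z mode idles until told what to attach and when to run:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 flag is the point of the test: a queue depth far past the default, to confirm the target keeps absorbing it (the run below settles around 11.5k IOPS of 4 KiB verify I/O).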
00:10:43.205 [2024-10-01 08:25:34.906534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590923 ] 00:10:43.205 [2024-10-01 08:25:34.969605] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.467 [2024-10-01 08:25:35.041150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.038 08:25:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.038 08:25:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:44.038 08:25:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:44.038 08:25:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.038 08:25:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:44.298 NVMe0n1 00:10:44.298 08:25:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.298 08:25:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:44.298 Running I/O for 10 seconds... 00:10:54.594 9841.00 IOPS, 38.44 MiB/s 10759.00 IOPS, 42.03 MiB/s 11107.00 IOPS, 43.39 MiB/s 11264.25 IOPS, 44.00 MiB/s 11338.40 IOPS, 44.29 MiB/s 11403.17 IOPS, 44.54 MiB/s 11410.43 IOPS, 44.57 MiB/s 11393.75 IOPS, 44.51 MiB/s 11458.56 IOPS, 44.76 MiB/s 11469.00 IOPS, 44.80 MiB/s 00:10:54.594 Latency(us) 00:10:54.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.594 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:54.594 Verification LBA range: start 0x0 length 0x4000 00:10:54.594 NVMe0n1 : 10.07 11491.83 44.89 0.00 0.00 88790.25 24794.45 63351.47 00:10:54.594 =================================================================================================================== 00:10:54.594 Total : 11491.83 44.89 0.00 0.00 88790.25 24794.45 63351.47 00:10:54.594 { 00:10:54.594 "results": [ 00:10:54.594 { 00:10:54.594 "job": "NVMe0n1", 00:10:54.594 "core_mask": "0x1", 00:10:54.594 "workload": "verify", 00:10:54.594 "status": "finished", 00:10:54.594 "verify_range": { 00:10:54.594 "start": 0, 00:10:54.594 "length": 16384 00:10:54.594 }, 00:10:54.594 "queue_depth": 1024, 00:10:54.594 "io_size": 4096, 00:10:54.594 "runtime": 10.069238, 00:10:54.594 "iops": 11491.832847728894, 00:10:54.594 "mibps": 44.88997206144099, 00:10:54.594 "io_failed": 0, 00:10:54.594 "io_timeout": 0, 00:10:54.594 "avg_latency_us": 88790.25340592611, 00:10:54.594 "min_latency_us": 24794.453333333335, 00:10:54.594 "max_latency_us": 63351.46666666667 00:10:54.594 } 00:10:54.594 ], 00:10:54.594 "core_count": 1 00:10:54.594 } 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3590923 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3590923 ']' 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3590923 00:10:54.594 
08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3590923 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3590923' 00:10:54.594 killing process with pid 3590923 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3590923 00:10:54.594 Received shutdown signal, test time was about 10.000000 seconds 00:10:54.594 00:10:54.594 Latency(us) 00:10:54.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.594 =================================================================================================================== 00:10:54.594 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3590923 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.594 rmmod nvme_tcp 00:10:54.594 rmmod nvme_fabrics 00:10:54.594 rmmod nvme_keyring 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3590827 ']' 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3590827 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3590827 ']' 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3590827 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.594 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3590827 00:10:54.855 
08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3590827' 00:10:54.855 killing process with pid 3590827 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3590827 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3590827 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.855 08:25:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.399 00:10:57.399 real 0m22.313s 00:10:57.399 user 0m25.791s 00:10:57.399 sys 0m6.846s 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.399 ************************************ 00:10:57.399 END TEST nvmf_queue_depth 00:10:57.399 ************************************ 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.399 ************************************ 00:10:57.399 START TEST nvmf_target_multipath 00:10:57.399 ************************************ 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:57.399 * Looking for test storage... 
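The multipath test re-runs the lcov version probe traced below (the same one queue_depth walked through above). Condensed, that cmp_versions logic looks like the sketch that follows; ver_lt is a hypothetical stand-in name, and it assumes purely numeric fields, which the real decimal helper enforces.

    ver_lt() {                       # true when $1 sorts before $2
        local IFS=.-: v
        local -a a b
        read -ra a <<< "$1"          # split version fields on . - :
        read -ra b <<< "$2"
        for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # missing fields compare as 0
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1                     # equal versions are not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

That is why "lt 1.15 2" keeps returning 0 in these traces: the installed lcov is pre-2.0, so every sub-test exports the legacy --rc coverage options.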
00:10:57.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.399 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:57.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.399 --rc genhtml_branch_coverage=1 00:10:57.399 --rc genhtml_function_coverage=1 00:10:57.399 --rc genhtml_legend=1 00:10:57.400 --rc geninfo_all_blocks=1 00:10:57.400 --rc geninfo_unexecuted_blocks=1 00:10:57.400 00:10:57.400 ' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.400 --rc genhtml_branch_coverage=1 00:10:57.400 --rc genhtml_function_coverage=1 00:10:57.400 --rc genhtml_legend=1 00:10:57.400 --rc geninfo_all_blocks=1 00:10:57.400 --rc geninfo_unexecuted_blocks=1 00:10:57.400 00:10:57.400 ' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.400 --rc genhtml_branch_coverage=1 00:10:57.400 --rc genhtml_function_coverage=1 00:10:57.400 --rc genhtml_legend=1 00:10:57.400 --rc geninfo_all_blocks=1 00:10:57.400 --rc geninfo_unexecuted_blocks=1 00:10:57.400 00:10:57.400 ' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:57.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.400 --rc genhtml_branch_coverage=1 00:10:57.400 --rc genhtml_function_coverage=1 00:10:57.400 --rc genhtml_legend=1 00:10:57.400 --rc geninfo_all_blocks=1 00:10:57.400 --rc geninfo_unexecuted_blocks=1 00:10:57.400 00:10:57.400 ' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.400 08:25:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:05.540 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # 
[[ ice == unbound ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:05.540 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.540 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:05.540 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:05.541 08:25:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:05.541 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.541 08:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:11:05.541 00:11:05.541 --- 10.0.0.2 ping statistics --- 00:11:05.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.541 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:05.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:11:05.541 00:11:05.541 --- 10.0.0.1 ping statistics --- 00:11:05.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.541 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:05.541 only one NIC for nvmf test 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
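
Collected from the nvmf_tcp_init trace above: the target-side port (cvl_0_0) is moved into a private network namespace so initiator and target traffic really crosses the two E810 ports instead of short-circuiting through the local stack. A minimal sketch of the same plumbing, with the interface names and 10.0.0.0/24 addressing taken from this run (run as root; the cvl_0_0/cvl_0_1 names are specific to this rig):

    #!/usr/bin/env bash
    # condensed from the nvmf/common.sh@250-291 trace above
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                    # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator IP (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface; the comment tag
    # lets the teardown strip the rule via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
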
00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.541 rmmod nvme_tcp 00:11:05.541 rmmod nvme_fabrics 00:11:05.541 rmmod nvme_keyring 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.541 08:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n '' ']' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:06.926 00:11:06.926 real 0m9.697s 00:11:06.926 user 0m2.125s 00:11:06.926 sys 0m5.537s 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:06.926 ************************************ 00:11:06.926 END TEST nvmf_target_multipath 00:11:06.926 ************************************ 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.926 ************************************ 00:11:06.926 START TEST nvmf_zcopy 00:11:06.926 ************************************ 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:06.926 * Looking for test storage... 
00:11:06.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.926 --rc genhtml_branch_coverage=1 00:11:06.926 --rc genhtml_function_coverage=1 00:11:06.926 --rc genhtml_legend=1 00:11:06.926 --rc geninfo_all_blocks=1 00:11:06.926 --rc geninfo_unexecuted_blocks=1 00:11:06.926 00:11:06.926 ' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.926 --rc genhtml_branch_coverage=1 00:11:06.926 --rc genhtml_function_coverage=1 00:11:06.926 --rc genhtml_legend=1 00:11:06.926 --rc geninfo_all_blocks=1 00:11:06.926 --rc geninfo_unexecuted_blocks=1 00:11:06.926 00:11:06.926 ' 00:11:06.926 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:06.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.927 --rc genhtml_branch_coverage=1 00:11:06.927 --rc genhtml_function_coverage=1 00:11:06.927 --rc genhtml_legend=1 00:11:06.927 --rc geninfo_all_blocks=1 00:11:06.927 --rc geninfo_unexecuted_blocks=1 00:11:06.927 00:11:06.927 ' 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:06.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.927 --rc genhtml_branch_coverage=1 00:11:06.927 --rc genhtml_function_coverage=1 00:11:06.927 --rc genhtml_legend=1 00:11:06.927 --rc geninfo_all_blocks=1 00:11:06.927 --rc geninfo_unexecuted_blocks=1 00:11:06.927 00:11:06.927 ' 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:06.927 08:25:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.075 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:15.076 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:15.076 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:15.076 
08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:15.076 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:15.076 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.076 08:26:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:11:15.076 00:11:15.076 --- 10.0.0.2 ping statistics --- 00:11:15.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.076 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:11:15.076 00:11:15.076 --- 10.0.0.1 ping statistics --- 00:11:15.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.076 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3601825 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3601825 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3601825 ']' 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.076 08:26:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.076 [2024-10-01 08:26:05.987332] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
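
The nvmfappstart step traced above launches the target inside the namespace and blocks until its RPC socket answers. A simplified stand-in for that helper, with the binary path and flags verbatim from the trace (-m 0x2 pins the app to core 1, matching the "Reactor started on core 1" notice; -e 0xFFFF enables all tracepoint groups; -i 0 is the shared-memory id); the polling loop is an assumption standing in for waitforlisten:

    # start nvmf_tgt in the target namespace, in the background
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the default RPC socket until the app is up (simplified waitforlisten)
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done
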
00:11:15.076 [2024-10-01 08:26:05.987400] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.076 [2024-10-01 08:26:06.076253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.076 [2024-10-01 08:26:06.168237] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.076 [2024-10-01 08:26:06.168298] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.076 [2024-10-01 08:26:06.168307] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.077 [2024-10-01 08:26:06.168314] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.077 [2024-10-01 08:26:06.168320] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.077 [2024-10-01 08:26:06.169096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.077 [2024-10-01 08:26:06.850626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.077 [2024-10-01 08:26:06.874891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.077 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.338 malloc0 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:15.338 { 00:11:15.338 "params": { 00:11:15.338 "name": "Nvme$subsystem", 00:11:15.338 "trtype": "$TEST_TRANSPORT", 00:11:15.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.338 "adrfam": "ipv4", 00:11:15.338 "trsvcid": "$NVMF_PORT", 00:11:15.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.338 "hdgst": ${hdgst:-false}, 00:11:15.338 "ddgst": ${ddgst:-false} 00:11:15.338 }, 00:11:15.338 "method": "bdev_nvme_attach_controller" 00:11:15.338 } 00:11:15.338 EOF 00:11:15.338 )") 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
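
For readability, the zcopy target provisioning traced above (zcopy.sh@22-30, driven through rpc_cmd) collapses to six RPCs. The same sequence via scripts/rpc.py against the default /var/tmp/spdk.sock, which should be equivalent here since rpc_cmd is the autotest wrapper around that socket (treat the wrapper equivalence as an assumption); all commands and arguments are verbatim from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with zero-copy enabled (-o and -c 0 as traced)
    "$rpc_py" nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem allowing any host (-a), serial SPDK00000000000001, up to 10 namespaces
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB RAM-backed bdev with a 4096-byte block size, exported as namespace 1
    "$rpc_py" bdev_malloc_create 32 4096 -b malloc0
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
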
00:11:15.338 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:11:15.339 08:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:15.339 "params": { 00:11:15.339 "name": "Nvme1", 00:11:15.339 "trtype": "tcp", 00:11:15.339 "traddr": "10.0.0.2", 00:11:15.339 "adrfam": "ipv4", 00:11:15.339 "trsvcid": "4420", 00:11:15.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.339 "hdgst": false, 00:11:15.339 "ddgst": false 00:11:15.339 }, 00:11:15.339 "method": "bdev_nvme_attach_controller" 00:11:15.339 }' 00:11:15.339 [2024-10-01 08:26:06.988738] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:11:15.339 [2024-10-01 08:26:06.988790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601906 ] 00:11:15.339 [2024-10-01 08:26:07.049230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.339 [2024-10-01 08:26:07.115187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.600 Running I/O for 10 seconds... 00:11:25.906 6645.00 IOPS, 51.91 MiB/s 6703.00 IOPS, 52.37 MiB/s 6724.00 IOPS, 52.53 MiB/s 7235.00 IOPS, 56.52 MiB/s 7737.80 IOPS, 60.45 MiB/s 8068.67 IOPS, 63.04 MiB/s 8307.29 IOPS, 64.90 MiB/s 8488.12 IOPS, 66.31 MiB/s 8628.44 IOPS, 67.41 MiB/s 8742.60 IOPS, 68.30 MiB/s 00:11:25.906 Latency(us) 00:11:25.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.906 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:25.906 Verification LBA range: start 0x0 length 0x1000 00:11:25.906 Nvme1n1 : 10.01 8745.20 68.32 0.00 0.00 14584.30 2034.35 29054.29 00:11:25.906 =================================================================================================================== 00:11:25.906 Total : 8745.20 68.32 0.00 0.00 14584.30 2034.35 29054.29 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3603992 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:25.906 { 00:11:25.906 "params": { 00:11:25.906 "name": "Nvme$subsystem", 00:11:25.906 "trtype": "$TEST_TRANSPORT", 00:11:25.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:25.906 "adrfam": "ipv4", 00:11:25.906 "trsvcid": "$NVMF_PORT", 00:11:25.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:25.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:25.906 "hdgst": 
${hdgst:-false}, 00:11:25.906 "ddgst": ${ddgst:-false} 00:11:25.906 }, 00:11:25.906 "method": "bdev_nvme_attach_controller" 00:11:25.906 } 00:11:25.906 EOF 00:11:25.906 )") 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:11:25.906 [2024-10-01 08:26:17.474801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.906 [2024-10-01 08:26:17.474830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:11:25.906 08:26:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:25.906 "params": { 00:11:25.906 "name": "Nvme1", 00:11:25.906 "trtype": "tcp", 00:11:25.906 "traddr": "10.0.0.2", 00:11:25.906 "adrfam": "ipv4", 00:11:25.906 "trsvcid": "4420", 00:11:25.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:25.906 "hdgst": false, 00:11:25.906 "ddgst": false 00:11:25.906 }, 00:11:25.906 "method": "bdev_nvme_attach_controller" 00:11:25.906 }' 00:11:25.906 [2024-10-01 08:26:17.486801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.906 [2024-10-01 08:26:17.486810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.906 [2024-10-01 08:26:17.498829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.906 [2024-10-01 08:26:17.498837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.906 [2024-10-01 08:26:17.510862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:25.906 [2024-10-01 08:26:17.510869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:25.906 [2024-10-01 08:26:17.520090] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
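
The rendered JSON above (a bdev_nvme_attach_controller config entry aimed at 10.0.0.2:4420) is what bdevperf receives on --json /dev/fd/63; that fd is bash process substitution of gen_nvmf_target_json from test/nvmf/common.sh, whose expansion is traced above. The second, 5-second run therefore reduces to the sketch below; the flag glosses are added, the command itself is verbatim:

    #   -t 5      run time in seconds
    #   -q 128    outstanding I/Os per job
    #   -w randrw random mixed read/write workload
    #   -M 50     read percentage of the mix
    #   -o 8192   I/O size in bytes
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192
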
00:11:25.906 [2024-10-01 08:26:17.520165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603992 ]
00:11:25.906 [2024-10-01 08:26:17.522891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:25.906 [2024-10-01 08:26:17.522900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every 10-15 ms while the second bdevperf instance initializes (08:26:17.534 through 08:26:17.835); repetitions elided ...]
00:11:25.906 [2024-10-01 08:26:17.585347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.906 [2024-10-01 08:26:17.650439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:26.168 Running I/O for 5 seconds...
00:11:26.168 [2024-10-01 08:26:17.851277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:26.168 [2024-10-01 08:26:17.851294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
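The two *ERROR* lines that dominate this run are target-side log output, not bdevperf failures: while the 5-second workload is in flight, the test keeps re-issuing an add-namespace RPC for NSID 1, which subsystem.c rejects because NSID 1 is still attached, and the paused-namespace RPC path in nvmf_rpc.c then reports the failure. A hypothetical reproduction of one such rejected call (the bdev name Malloc0 and the exact rpc.py flags are assumptions, not taken from this log):

```bash
# Ask the target to attach a bdev as NSID 1 on cnode1 while NSID 1 is still
# in use; the target should answer with the same error pair logged above.
./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
```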
[... the spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair continues at the same ~13 ms cadence for the whole run (08:26:18.033 through 08:26:21.069); only bdevperf's periodic rate updates break the pattern ...]
00:11:27.215 19126.00 IOPS, 149.42 MiB/s
00:11:28.262 19195.50 IOPS, 149.96 MiB/s
00:11:29.049 19193.33 IOPS, 149.95 MiB/s
00:11:29.311 [2024-10-01 08:26:21.069042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.311 [2024-10-01 08:26:21.069057]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.311 [2024-10-01 08:26:21.082569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.311 [2024-10-01 08:26:21.082584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.311 [2024-10-01 08:26:21.095981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.311 [2024-10-01 08:26:21.096001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.311 [2024-10-01 08:26:21.109692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.311 [2024-10-01 08:26:21.109706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.311 [2024-10-01 08:26:21.122841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.311 [2024-10-01 08:26:21.122857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.135875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.135890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.148541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.148556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.161333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.161348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.174608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.174623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.187876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.187891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.201351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.201366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.214899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.214914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.228273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.228287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.241547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.241562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.254950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.254965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.267507] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.267526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.280155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.280170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.292787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.292802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.305103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.305119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.317915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.317930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.331438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.331454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.344340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.344355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.357553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.357568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.370250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.370264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.572 [2024-10-01 08:26:21.383739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.572 [2024-10-01 08:26:21.383753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.397211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.397226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.409822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.409837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.422548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.422563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.434917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.434932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.448792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.448806] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.461890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.461904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.475201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.475215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.488441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.488456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.501286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.501301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.513997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.514012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.526599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.526614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.539955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.539970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.552900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.552914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.566492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.566506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.579276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.579291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.592620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.592634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.605781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.605796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.619385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.619400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.632769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.632784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.833 [2024-10-01 08:26:21.646760] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.833 [2024-10-01 08:26:21.646774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.659624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.659639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.673083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.673097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.686774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.686789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.700272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.700287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.713205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.713220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.725880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.725895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.738458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.738473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.751303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.751318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.764154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.764169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.776781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.776795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.789184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.789199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.802863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.802878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.094 [2024-10-01 08:26:21.816282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.094 [2024-10-01 08:26:21.816297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.095 [2024-10-01 08:26:21.829633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.095 [2024-10-01 08:26:21.829647] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.095 [2024-10-01 08:26:21.842453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.095 [2024-10-01 08:26:21.842468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.095 19200.00 IOPS, 150.00 MiB/s [2024-10-01 08:26:21.855867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.095 [2024-10-01 08:26:21.855882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.095 [2024-10-01 08:26:21.868801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.095 [2024-10-01 08:26:21.868816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.095 [2024-10-01 08:26:21.882184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.095 [2024-10-01 08:26:21.882199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.095 [2024-10-01 08:26:21.894892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.095 [2024-10-01 08:26:21.894907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.095 [2024-10-01 08:26:21.907705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.095 [2024-10-01 08:26:21.907720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:21.921024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:21.921039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:21.934345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:21.934359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:21.948028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:21.948043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:21.961100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:21.961115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:21.974195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:21.974210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:21.987647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:21.987662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.001164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.001179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.014002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.014017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.027444] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.027459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.040360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.040375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.053432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.053447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.066155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.066170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.079094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.079108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.092327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.092342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.105925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.105939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.119165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.119179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.132415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.132430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.145405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.145420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.158745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.158760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.355 [2024-10-01 08:26:22.171213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.355 [2024-10-01 08:26:22.171228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.615 [2024-10-01 08:26:22.183784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.615 [2024-10-01 08:26:22.183799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.615 [2024-10-01 08:26:22.196925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.615 [2024-10-01 08:26:22.196941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.615 [2024-10-01 08:26:22.210125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.210140] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.223408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.223423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.237113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.237128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.250734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.250755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.263477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.263492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.276484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.276499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.290254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.290268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.303553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.303568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.317119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.317134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.329988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.330007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.343433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.343448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.355783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.355798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.369469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.369484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.382879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.382894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.395639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.395654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.408806] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.408821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.421764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.421779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.616 [2024-10-01 08:26:22.435013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.616 [2024-10-01 08:26:22.435028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.447746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.877 [2024-10-01 08:26:22.447761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.460400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.877 [2024-10-01 08:26:22.460416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.473378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.877 [2024-10-01 08:26:22.473393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.486263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.877 [2024-10-01 08:26:22.486277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.499283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.877 [2024-10-01 08:26:22.499303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.512663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.877 [2024-10-01 08:26:22.512679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.526107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.877 [2024-10-01 08:26:22.526122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.539714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.877 [2024-10-01 08:26:22.539729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.877 [2024-10-01 08:26:22.552194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.552209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.564833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.564849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.578287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.578302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.591409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.591424] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.605010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.605025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.617422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.617437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.630393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.630408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.643782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.643797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.656822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.656837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.669416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.669431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.682354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.682369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.878 [2024-10-01 08:26:22.695633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.878 [2024-10-01 08:26:22.695648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.708981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.709002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.722727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.722743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.736019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.736034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.749320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.749340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.762578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.762593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.775908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.775924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.789479] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.789494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.802403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.802418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.815755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.815770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.829071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.829086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.842412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.842427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 19212.00 IOPS, 150.09 MiB/s [2024-10-01 08:26:22.854373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.854388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 00:11:31.138 Latency(us) 00:11:31.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.138 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:31.138 Nvme1n1 : 5.01 19213.85 150.11 0.00 0.00 6654.95 2512.21 16274.77 00:11:31.138 =================================================================================================================== 00:11:31.138 Total : 19213.85 150.11 0.00 0.00 6654.95 2512.21 16274.77 00:11:31.138 [2024-10-01 08:26:22.864300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.864314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.876330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.876343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.888363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.888376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.900393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.900406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.912420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.912431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.924448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.924460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.936478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 
[2024-10-01 08:26:22.936486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.948512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.948523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.138 [2024-10-01 08:26:22.960541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.138 [2024-10-01 08:26:22.960551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.398 [2024-10-01 08:26:22.972572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.398 [2024-10-01 08:26:22.972584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.398 [2024-10-01 08:26:22.984600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.398 [2024-10-01 08:26:22.984608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.398 [2024-10-01 08:26:22.996630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.398 [2024-10-01 08:26:22.996638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3603992) - No such process 00:11:31.398 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3603992 00:11:31.398 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.398 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:31.399 delay0 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.399 08:26:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:31.399 [2024-10-01 08:26:23.138385] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:39.609 Initializing NVMe Controllers 00:11:39.609 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:39.609 Initialization complete. Launching workers. 00:11:39.609 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 235, failed: 30940 00:11:39.609 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 31055, failed to submit 120 00:11:39.609 success 30973, unsuccessful 82, failed 0 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.609 rmmod nvme_tcp 00:11:39.609 rmmod nvme_fabrics 00:11:39.609 rmmod nvme_keyring 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3601825 ']' 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3601825 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3601825 ']' 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3601825 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3601825 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3601825' 00:11:39.609 killing process with pid 3601825 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3601825 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3601825 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:39.609 
08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.609 08:26:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.998 00:11:40.998 real 0m34.102s 00:11:40.998 user 0m45.657s 00:11:40.998 sys 0m11.090s 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:40.998 ************************************ 00:11:40.998 END TEST nvmf_zcopy 00:11:40.998 ************************************ 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:40.998 ************************************ 00:11:40.998 START TEST nvmf_nmic 00:11:40.998 ************************************ 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:40.998 * Looking for test storage... 
00:11:40.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:11:40.998 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:41.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.260 --rc genhtml_branch_coverage=1 00:11:41.260 --rc genhtml_function_coverage=1 00:11:41.260 --rc genhtml_legend=1 00:11:41.260 --rc geninfo_all_blocks=1 00:11:41.260 --rc geninfo_unexecuted_blocks=1 00:11:41.260 00:11:41.260 ' 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:41.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.260 --rc genhtml_branch_coverage=1 00:11:41.260 --rc genhtml_function_coverage=1 00:11:41.260 --rc genhtml_legend=1 00:11:41.260 --rc geninfo_all_blocks=1 00:11:41.260 --rc geninfo_unexecuted_blocks=1 00:11:41.260 00:11:41.260 ' 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:41.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.260 --rc genhtml_branch_coverage=1 00:11:41.260 --rc genhtml_function_coverage=1 00:11:41.260 --rc genhtml_legend=1 00:11:41.260 --rc geninfo_all_blocks=1 00:11:41.260 --rc geninfo_unexecuted_blocks=1 00:11:41.260 00:11:41.260 ' 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:41.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.260 --rc genhtml_branch_coverage=1 00:11:41.260 --rc genhtml_function_coverage=1 00:11:41.260 --rc genhtml_legend=1 00:11:41.260 --rc geninfo_all_blocks=1 00:11:41.260 --rc geninfo_unexecuted_blocks=1 00:11:41.260 00:11:41.260 ' 00:11:41.260 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:41.261 
08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.261 08:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.853 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:47.854 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:47.854 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.854 
08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:47.854 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:47.854 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:11:47.854 00:11:47.854 --- 10.0.0.2 ping statistics --- 00:11:47.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.854 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:11:47.854 00:11:47.854 --- 10.0.0.1 ping statistics --- 00:11:47.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.854 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:47.854 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3610604 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3610604 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3610604 ']' 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.855 08:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.115 [2024-10-01 08:26:39.705764] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:11:48.115 [2024-10-01 08:26:39.705819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.115 [2024-10-01 08:26:39.773836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.115 [2024-10-01 08:26:39.839001] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.115 [2024-10-01 08:26:39.839037] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.115 [2024-10-01 08:26:39.839045] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.115 [2024-10-01 08:26:39.839052] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.115 [2024-10-01 08:26:39.839058] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.115 [2024-10-01 08:26:39.840576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.115 [2024-10-01 08:26:39.840688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.115 [2024-10-01 08:26:39.840845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.115 [2024-10-01 08:26:39.840846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.686 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.686 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:48.686 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:48.686 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.686 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.946 [2024-10-01 08:26:40.539791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.946 Malloc0 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.946 [2024-10-01 08:26:40.598923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:48.946 test case1: single bdev can't be used in multiple subsystems 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.946 [2024-10-01 08:26:40.634830] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:48.946 [2024-10-01 08:26:40.634849] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:48.946 [2024-10-01 08:26:40.634857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.946 request: 00:11:48.946 { 00:11:48.946 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:48.946 "namespace": { 00:11:48.946 "bdev_name": "Malloc0", 00:11:48.946 "no_auto_visible": false 
00:11:48.946 }, 00:11:48.946 "method": "nvmf_subsystem_add_ns", 00:11:48.946 "req_id": 1 00:11:48.946 } 00:11:48.946 Got JSON-RPC error response 00:11:48.946 response: 00:11:48.946 { 00:11:48.946 "code": -32602, 00:11:48.946 "message": "Invalid parameters" 00:11:48.946 } 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:48.946 Adding namespace failed - expected result. 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:48.946 test case2: host connect to nvmf target in multiple paths 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:48.946 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.947 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.947 [2024-10-01 08:26:40.646983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:48.947 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.947 08:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.331 08:26:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:52.244 08:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.244 08:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.244 08:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.244 08:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:52.244 08:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.167 08:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.167 08:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.167 08:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.167 08:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:54.167 08:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.167 08:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:54.167 08:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:54.167 [global] 00:11:54.167 thread=1 00:11:54.167 invalidate=1 00:11:54.167 rw=write 00:11:54.167 time_based=1 00:11:54.167 runtime=1 00:11:54.167 ioengine=libaio 00:11:54.167 direct=1 00:11:54.167 bs=4096 00:11:54.167 iodepth=1 00:11:54.167 norandommap=0 00:11:54.167 numjobs=1 00:11:54.167 00:11:54.167 verify_dump=1 00:11:54.167 verify_backlog=512 00:11:54.167 verify_state_save=0 00:11:54.167 do_verify=1 00:11:54.167 verify=crc32c-intel 00:11:54.167 [job0] 00:11:54.167 filename=/dev/nvme0n1 00:11:54.167 Could not set queue depth (nvme0n1) 00:11:54.427 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:54.427 fio-3.35 00:11:54.427 Starting 1 thread 00:11:55.370 00:11:55.370 job0: (groupid=0, jobs=1): err= 0: pid=3612151: Tue Oct 1 08:26:47 2024 00:11:55.370 read: IOPS=17, BW=70.4KiB/s (72.1kB/s)(72.0KiB/1023msec) 00:11:55.370 slat (nsec): min=8389, max=26752, avg=24112.44, stdev=4156.92 00:11:55.370 clat (usec): min=731, max=42038, avg=39533.11, stdev=9688.66 00:11:55.370 lat (usec): min=757, max=42063, avg=39557.22, stdev=9688.41 00:11:55.370 clat percentiles (usec): 00:11:55.370 | 1.00th=[ 734], 5.00th=[ 734], 10.00th=[41157], 20.00th=[41681], 00:11:55.370 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:55.370 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:55.370 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:55.370 | 99.99th=[42206] 00:11:55.370 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:11:55.370 slat (nsec): min=9526, max=65397, avg=26352.26, stdev=10223.56 00:11:55.370 clat (usec): min=229, max=792, avg=575.48, stdev=103.72 00:11:55.370 lat (usec): min=239, max=824, avg=601.83, stdev=109.63 00:11:55.370 clat percentiles (usec): 00:11:55.370 | 1.00th=[ 330], 5.00th=[ 388], 10.00th=[ 433], 20.00th=[ 478], 00:11:55.370 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:11:55.370 | 70.00th=[ 644], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 717], 00:11:55.370 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 791], 99.95th=[ 791], 00:11:55.370 | 99.99th=[ 791] 00:11:55.370 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:55.370 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:55.370 lat (usec) : 250=0.57%, 500=23.40%, 750=71.70%, 1000=1.13% 00:11:55.370 lat (msec) : 50=3.21% 00:11:55.370 cpu : usr=0.49%, sys=1.47%, ctx=530, majf=0, minf=1 00:11:55.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:55.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.370 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:55.370 00:11:55.370 Run status group 0 (all jobs): 00:11:55.370 READ: bw=70.4KiB/s (72.1kB/s), 70.4KiB/s-70.4KiB/s (72.1kB/s-72.1kB/s), io=72.0KiB (73.7kB), run=1023-1023msec 00:11:55.370 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:11:55.370 00:11:55.370 Disk stats (read/write): 00:11:55.370 nvme0n1: ios=65/512, merge=0/0, ticks=656/284, in_queue=940, util=93.89% 00:11:55.370 08:26:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.632 rmmod nvme_tcp 00:11:55.632 rmmod nvme_fabrics 00:11:55.632 rmmod nvme_keyring 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3610604 ']' 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3610604 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3610604 ']' 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3610604 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.632 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3610604 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3610604' 00:11:55.893 killing process with pid 3610604 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3610604 00:11:55.893 08:26:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3610604 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.893 08:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.438 00:11:58.438 real 0m17.044s 00:11:58.438 user 0m47.976s 00:11:58.438 sys 0m5.894s 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:58.438 ************************************ 00:11:58.438 END TEST nvmf_nmic 00:11:58.438 ************************************ 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.438 ************************************ 00:11:58.438 START TEST nvmf_fio_target 00:11:58.438 ************************************ 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:58.438 * Looking for test storage... 
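Between the two tests, nvmftestinit rebuilds the same point-to-point topology from the pair of E810 ports discovered earlier: the first port is moved into a private network namespace and serves as the target side, while the second stays in the root namespace as the initiator. Condensed from the commands visible in the trace above (interface and namespace names exactly as logged; this is a reconstruction of what the trace ran, not the script source):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace

  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk \
      ip addr add 10.0.0.2/24 dev cvl_0_0                # target side (inside the ns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port; the comment tag lets teardown strip the rule again
  # (nvmftestfini does iptables-save | grep -v SPDK_NVMF | iptables-restore)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # the target application then runs inside the namespace, as logged above
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF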
00:11:58.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:58.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.438 --rc genhtml_branch_coverage=1 00:11:58.438 --rc genhtml_function_coverage=1 00:11:58.438 --rc genhtml_legend=1 00:11:58.438 --rc geninfo_all_blocks=1 00:11:58.438 --rc geninfo_unexecuted_blocks=1 00:11:58.438 00:11:58.438 ' 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:58.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.438 --rc genhtml_branch_coverage=1 00:11:58.438 --rc genhtml_function_coverage=1 00:11:58.438 --rc genhtml_legend=1 00:11:58.438 --rc geninfo_all_blocks=1 00:11:58.438 --rc geninfo_unexecuted_blocks=1 00:11:58.438 00:11:58.438 ' 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:58.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.438 --rc genhtml_branch_coverage=1 00:11:58.438 --rc genhtml_function_coverage=1 00:11:58.438 --rc genhtml_legend=1 00:11:58.438 --rc geninfo_all_blocks=1 00:11:58.438 --rc geninfo_unexecuted_blocks=1 00:11:58.438 00:11:58.438 ' 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:58.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.438 --rc genhtml_branch_coverage=1 00:11:58.438 --rc genhtml_function_coverage=1 00:11:58.438 --rc genhtml_legend=1 00:11:58.438 --rc geninfo_all_blocks=1 00:11:58.438 --rc geninfo_unexecuted_blocks=1 00:11:58.438 00:11:58.438 ' 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.438 08:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.438 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:58.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:58.439 08:26:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:58.439 08:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:06.581 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.582 08:26:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:06.582 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:06.582 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.582 08:26:56 
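gather_supported_nvmf_pci_devs above buckets NICs by PCI vendor:device pairs (Intel 0x1592/0x159b for E810, 0x37d2 for X722, the 0x15b3 IDs for Mellanox), and both ports on bus 4b match 0x159b. A standalone sketch of the same bucketing, using lspci instead of the pci_bus_cache that nvmf/common.sh builds elsewhere (so the data source here is an assumption):

#!/usr/bin/env bash
# Classify NICs the way the trace does: by PCI vendor:device ID.
declare -a e810=() x722=() mlx=()

while read -r addr _class vendor device _; do
    vendor=${vendor//\"/} device=${device//\"/}   # lspci -mm quotes fields
    case "$vendor:$device" in
        8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810
        8086:37d2)           x722+=("$addr") ;;   # Intel X722
        15b3:*)              mlx+=("$addr")  ;;   # Mellanox ConnectX
    esac
done < <(lspci -Dnmm)

printf 'E810 port: %s\n' "${e810[@]}"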
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:06.582 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:06.582 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.582 08:26:56 
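Each matched PCI function is then resolved to its kernel netdev through sysfs; the glob and the "##*/" strip below are taken directly from the trace (nvmf/common.sh@407 and @423), while reading operstate for the up/down filter is an assumption about how the trace's "[[ up == up ]]" check is fed:

#!/usr/bin/env bash
# Resolve the netdev name(s) behind one PCI function, as the trace does
# for 0000:4b:00.0 -> cvl_0_0 and 0000:4b:00.1 -> cvl_0_1.
pci=0000:4b:00.0   # address taken from the trace above

pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name

for dev in "${pci_net_devs[@]}"; do
    echo "Found net devices under $pci: $dev ($(cat "/sys/class/net/$dev/operstate"))"
done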
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.582 08:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:06.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:12:06.582 00:12:06.582 --- 10.0.0.2 ping statistics --- 00:12:06.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.582 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:12:06.582 00:12:06.582 --- 10.0.0.1 ping statistics --- 00:12:06.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.582 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3616536 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3616536 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.582 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3616536 ']' 00:12:06.583 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.583 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:06.583 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.583 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:06.583 08:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.583 [2024-10-01 08:26:57.347641] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
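At this point nvmftestinit has built the whole test network and launched the target inside it: the first E810 port (cvl_0_0) moved into namespace cvl_0_0_ns_spdk as 10.0.0.2, the second port (cvl_0_1) stayed in the root namespace as the 10.0.0.1 initiator, port 4420 was opened, and both directions answered ping. Condensed from the commands in the trace:

# Target-side port in its own namespace, initiator port in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                             # initiator -> target

# The target itself then runs inside the namespace (full Jenkins path
# in the trace, shortened here to the SPDK build tree):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF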
00:12:06.583 [2024-10-01 08:26:57.347711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.583 [2024-10-01 08:26:57.420404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.583 [2024-10-01 08:26:57.494057] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.583 [2024-10-01 08:26:57.494094] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.583 [2024-10-01 08:26:57.494101] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.583 [2024-10-01 08:26:57.494108] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.583 [2024-10-01 08:26:57.494114] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.583 [2024-10-01 08:26:57.495668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.583 [2024-10-01 08:26:57.495784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.583 [2024-10-01 08:26:57.495921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.583 [2024-10-01 08:26:57.495922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.583 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.583 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:06.583 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:06.583 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.583 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.583 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.583 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:06.583 [2024-10-01 08:26:58.353904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.583 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:06.844 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:06.844 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.105 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:07.105 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.365 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:07.365 08:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.365 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:07.365 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:07.625 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.884 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:07.884 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.144 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:08.144 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.144 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:08.144 08:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:08.403 08:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.663 08:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:08.663 08:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.663 08:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:08.663 08:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.922 08:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.182 [2024-10-01 08:27:00.804332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.182 08:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:09.443 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:09.443 08:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.357 08:27:02 
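With the target up, fio.sh provisions storage over RPC and connects the initiator. Condensed from the commands above (rpc.py path shortened): seven 64 MiB malloc bdevs with 512 B blocks, two of them striped into raid0, three concatenated into concat0, and four namespaces exported from one subsystem on 10.0.0.2:4420:

rpc=./scripts/rpc.py   # full Jenkins path in the trace

$rpc nvmf_create_transport -t tcp -o -u 8192
for _ in 1 2 3 4 5 6 7; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces are what waitforserial then sees as /dev/nvme0n1 through /dev/nvme0n4.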
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:11.357 08:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:11.357 08:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.357 08:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:11.357 08:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:11.357 08:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:13.321 08:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:13.321 08:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:13.321 08:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.321 08:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:13.321 08:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.321 08:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:13.321 08:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:13.321 [global] 00:12:13.321 thread=1 00:12:13.321 invalidate=1 00:12:13.321 rw=write 00:12:13.321 time_based=1 00:12:13.321 runtime=1 00:12:13.321 ioengine=libaio 00:12:13.321 direct=1 00:12:13.321 bs=4096 00:12:13.321 iodepth=1 00:12:13.321 norandommap=0 00:12:13.321 numjobs=1 00:12:13.321 00:12:13.321 verify_dump=1 00:12:13.321 verify_backlog=512 00:12:13.321 verify_state_save=0 00:12:13.321 do_verify=1 00:12:13.321 verify=crc32c-intel 00:12:13.321 [job0] 00:12:13.321 filename=/dev/nvme0n1 00:12:13.321 [job1] 00:12:13.321 filename=/dev/nvme0n2 00:12:13.321 [job2] 00:12:13.321 filename=/dev/nvme0n3 00:12:13.321 [job3] 00:12:13.321 filename=/dev/nvme0n4 00:12:13.321 Could not set queue depth (nvme0n1) 00:12:13.321 Could not set queue depth (nvme0n2) 00:12:13.321 Could not set queue depth (nvme0n3) 00:12:13.321 Could not set queue depth (nvme0n4) 00:12:13.586 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.586 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.586 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.586 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.586 fio-3.35 00:12:13.586 Starting 4 threads 00:12:15.001 00:12:15.001 job0: (groupid=0, jobs=1): err= 0: pid=3618526: Tue Oct 1 08:27:06 2024 00:12:15.001 read: IOPS=592, BW=2370KiB/s (2427kB/s)(2372KiB/1001msec) 00:12:15.001 slat (nsec): min=7037, max=57976, avg=24152.69, stdev=7539.98 00:12:15.001 clat (usec): min=289, max=41895, avg=831.08, stdev=1691.48 00:12:15.001 lat (usec): min=315, max=41922, avg=855.23, stdev=1691.64 00:12:15.001 clat percentiles (usec): 00:12:15.001 | 1.00th=[ 529], 5.00th=[ 627], 10.00th=[ 652], 20.00th=[ 709], 
00:12:15.001 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 766], 60.00th=[ 783], 00:12:15.001 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 865], 00:12:15.001 | 99.00th=[ 1057], 99.50th=[ 1139], 99.90th=[41681], 99.95th=[41681], 00:12:15.001 | 99.99th=[41681] 00:12:15.001 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:12:15.001 slat (nsec): min=9322, max=65962, avg=27226.01, stdev=10917.95 00:12:15.001 clat (usec): min=152, max=959, avg=443.77, stdev=117.15 00:12:15.001 lat (usec): min=163, max=993, avg=470.99, stdev=121.73 00:12:15.001 clat percentiles (usec): 00:12:15.001 | 1.00th=[ 249], 5.00th=[ 285], 10.00th=[ 314], 20.00th=[ 347], 00:12:15.001 | 30.00th=[ 375], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[ 461], 00:12:15.001 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 545], 95.00th=[ 701], 00:12:15.001 | 99.00th=[ 840], 99.50th=[ 898], 99.90th=[ 955], 99.95th=[ 963], 00:12:15.001 | 99.99th=[ 963] 00:12:15.001 bw ( KiB/s): min= 4096, max= 4096, per=37.34%, avg=4096.00, stdev= 0.00, samples=1 00:12:15.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:15.001 lat (usec) : 250=0.68%, 500=51.52%, 750=22.51%, 1000=24.74% 00:12:15.001 lat (msec) : 2=0.49%, 50=0.06% 00:12:15.001 cpu : usr=2.80%, sys=3.90%, ctx=1617, majf=0, minf=1 00:12:15.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.001 issued rwts: total=593,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.001 job1: (groupid=0, jobs=1): err= 0: pid=3618527: Tue Oct 1 08:27:06 2024 00:12:15.001 read: IOPS=36, BW=148KiB/s (151kB/s)(148KiB/1001msec) 00:12:15.001 slat (nsec): min=7924, max=44272, avg=26636.59, stdev=5795.11 00:12:15.001 clat (usec): min=656, max=42081, avg=18978.60, stdev=20360.31 00:12:15.001 lat (usec): min=682, max=42108, avg=19005.24, stdev=20359.86 00:12:15.001 clat percentiles (usec): 00:12:15.001 | 1.00th=[ 660], 5.00th=[ 725], 10.00th=[ 734], 20.00th=[ 791], 00:12:15.001 | 30.00th=[ 807], 40.00th=[ 857], 50.00th=[ 1156], 60.00th=[41157], 00:12:15.001 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:12:15.001 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:15.001 | 99.99th=[42206] 00:12:15.001 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:15.001 slat (nsec): min=8048, max=67228, avg=28627.29, stdev=11055.36 00:12:15.001 clat (usec): min=124, max=974, avg=546.35, stdev=171.86 00:12:15.001 lat (usec): min=134, max=1008, avg=574.98, stdev=177.56 00:12:15.001 clat percentiles (usec): 00:12:15.001 | 1.00th=[ 262], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 383], 00:12:15.001 | 30.00th=[ 445], 40.00th=[ 474], 50.00th=[ 515], 60.00th=[ 578], 00:12:15.001 | 70.00th=[ 660], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 840], 00:12:15.001 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 971], 99.95th=[ 971], 00:12:15.001 | 99.99th=[ 971] 00:12:15.001 bw ( KiB/s): min= 4096, max= 4096, per=37.34%, avg=4096.00, stdev= 0.00, samples=1 00:12:15.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:15.001 lat (usec) : 250=0.55%, 500=42.44%, 750=38.62%, 1000=14.94% 00:12:15.001 lat (msec) : 2=0.36%, 20=0.18%, 50=2.91% 00:12:15.001 cpu : usr=0.50%, sys=2.20%, ctx=549, majf=0, minf=1 00:12:15.001 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.001 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.001 job2: (groupid=0, jobs=1): err= 0: pid=3618528: Tue Oct 1 08:27:06 2024 00:12:15.001 read: IOPS=500, BW=2002KiB/s (2050kB/s)(2004KiB/1001msec) 00:12:15.001 slat (nsec): min=8558, max=50082, avg=27680.57, stdev=3254.32 00:12:15.001 clat (usec): min=569, max=41988, avg=1317.66, stdev=3628.62 00:12:15.001 lat (usec): min=597, max=42015, avg=1345.34, stdev=3628.25 00:12:15.001 clat percentiles (usec): 00:12:15.001 | 1.00th=[ 644], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 930], 00:12:15.001 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:12:15.001 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:12:15.001 | 99.00th=[ 1467], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:12:15.001 | 99.99th=[42206] 00:12:15.001 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:15.001 slat (nsec): min=9222, max=83299, avg=31650.60, stdev=9034.34 00:12:15.001 clat (usec): min=170, max=862, avg=590.04, stdev=120.18 00:12:15.001 lat (usec): min=206, max=899, avg=621.69, stdev=123.27 00:12:15.001 clat percentiles (usec): 00:12:15.001 | 1.00th=[ 277], 5.00th=[ 367], 10.00th=[ 441], 20.00th=[ 486], 00:12:15.001 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:12:15.001 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766], 00:12:15.001 | 99.00th=[ 824], 99.50th=[ 848], 99.90th=[ 865], 99.95th=[ 865], 00:12:15.001 | 99.99th=[ 865] 00:12:15.001 bw ( KiB/s): min= 4096, max= 4096, per=37.34%, avg=4096.00, stdev= 0.00, samples=1 00:12:15.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:15.001 lat (usec) : 250=0.20%, 500=12.04%, 750=36.33%, 1000=29.12% 00:12:15.001 lat (msec) : 2=21.82%, 10=0.10%, 50=0.39% 00:12:15.001 cpu : usr=1.90%, sys=4.30%, ctx=1013, majf=0, minf=1 00:12:15.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.001 issued rwts: total=501,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.001 job3: (groupid=0, jobs=1): err= 0: pid=3618529: Tue Oct 1 08:27:06 2024 00:12:15.001 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:15.001 slat (nsec): min=5997, max=60286, avg=27070.77, stdev=2859.48 00:12:15.001 clat (usec): min=595, max=41520, avg=1055.73, stdev=1794.18 00:12:15.001 lat (usec): min=622, max=41547, avg=1082.80, stdev=1794.19 00:12:15.001 clat percentiles (usec): 00:12:15.001 | 1.00th=[ 701], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 914], 00:12:15.001 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:12:15.001 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1057], 95.00th=[ 1090], 00:12:15.001 | 99.00th=[ 1172], 99.50th=[ 1401], 99.90th=[41681], 99.95th=[41681], 00:12:15.001 | 99.99th=[41681] 00:12:15.001 write: IOPS=696, BW=2785KiB/s (2852kB/s)(2788KiB/1001msec); 0 zone resets 00:12:15.001 slat (nsec): min=9144, max=58250, avg=30245.92, stdev=10023.02 00:12:15.001 
clat (usec): min=229, max=852, avg=596.53, stdev=115.09 00:12:15.001 lat (usec): min=239, max=879, avg=626.77, stdev=119.38 00:12:15.001 clat percentiles (usec): 00:12:15.001 | 1.00th=[ 285], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 494], 00:12:15.001 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:12:15.001 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 766], 00:12:15.001 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 857], 99.95th=[ 857], 00:12:15.001 | 99.99th=[ 857] 00:12:15.001 bw ( KiB/s): min= 4096, max= 4096, per=37.34%, avg=4096.00, stdev= 0.00, samples=1 00:12:15.001 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:15.001 lat (usec) : 250=0.17%, 500=12.16%, 750=42.10%, 1000=26.88% 00:12:15.001 lat (msec) : 2=18.61%, 50=0.08% 00:12:15.001 cpu : usr=2.00%, sys=5.20%, ctx=1209, majf=0, minf=1 00:12:15.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.001 issued rwts: total=512,697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.001 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.001 00:12:15.001 Run status group 0 (all jobs): 00:12:15.001 READ: bw=6565KiB/s (6723kB/s), 148KiB/s-2370KiB/s (151kB/s-2427kB/s), io=6572KiB (6730kB), run=1001-1001msec 00:12:15.001 WRITE: bw=10.7MiB/s (11.2MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=10.7MiB (11.2MB), run=1001-1001msec 00:12:15.001 00:12:15.001 Disk stats (read/write): 00:12:15.001 nvme0n1: ios=562/896, merge=0/0, ticks=428/379, in_queue=807, util=86.97% 00:12:15.001 nvme0n2: ios=50/512, merge=0/0, ticks=559/236, in_queue=795, util=88.33% 00:12:15.001 nvme0n3: ios=335/512, merge=0/0, ticks=463/237, in_queue=700, util=88.34% 00:12:15.002 nvme0n4: ios=481/512, merge=0/0, ticks=950/250, in_queue=1200, util=91.20% 00:12:15.002 08:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:15.002 [global] 00:12:15.002 thread=1 00:12:15.002 invalidate=1 00:12:15.002 rw=randwrite 00:12:15.002 time_based=1 00:12:15.002 runtime=1 00:12:15.002 ioengine=libaio 00:12:15.002 direct=1 00:12:15.002 bs=4096 00:12:15.002 iodepth=1 00:12:15.002 norandommap=0 00:12:15.002 numjobs=1 00:12:15.002 00:12:15.002 verify_dump=1 00:12:15.002 verify_backlog=512 00:12:15.002 verify_state_save=0 00:12:15.002 do_verify=1 00:12:15.002 verify=crc32c-intel 00:12:15.002 [job0] 00:12:15.002 filename=/dev/nvme0n1 00:12:15.002 [job1] 00:12:15.002 filename=/dev/nvme0n2 00:12:15.002 [job2] 00:12:15.002 filename=/dev/nvme0n3 00:12:15.002 [job3] 00:12:15.002 filename=/dev/nvme0n4 00:12:15.002 Could not set queue depth (nvme0n1) 00:12:15.002 Could not set queue depth (nvme0n2) 00:12:15.002 Could not set queue depth (nvme0n3) 00:12:15.002 Could not set queue depth (nvme0n4) 00:12:15.263 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.263 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.263 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.263 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.263 fio-3.35 
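The job definitions just printed have the same shape as the first pass with rw flipped to randwrite. Assuming fio-wrapper ultimately expands its -p/-i/-d/-t/-r/-v options into plain fio flags matching the job file dumped above (the wrapper's exact expansion is not visible in this log), a single-job equivalent would be:

# Hedged single-job equivalent of: fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 --numjobs=1 \
    --rw=randwrite --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512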
00:12:15.263 Starting 4 threads 00:12:16.673 00:12:16.673 job0: (groupid=0, jobs=1): err= 0: pid=3619056: Tue Oct 1 08:27:08 2024 00:12:16.673 read: IOPS=18, BW=73.4KiB/s (75.2kB/s)(76.0KiB/1035msec) 00:12:16.673 slat (nsec): min=26989, max=28055, avg=27522.11, stdev=335.65 00:12:16.673 clat (usec): min=1044, max=42042, avg=38998.45, stdev=9197.29 00:12:16.673 lat (usec): min=1072, max=42069, avg=39025.97, stdev=9197.34 00:12:16.673 clat percentiles (usec): 00:12:16.673 | 1.00th=[ 1045], 5.00th=[ 1045], 10.00th=[40633], 20.00th=[40633], 00:12:16.673 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:16.673 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:12:16.673 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:16.673 | 99.99th=[42206] 00:12:16.673 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:12:16.673 slat (nsec): min=9196, max=53677, avg=33565.12, stdev=6727.02 00:12:16.673 clat (usec): min=178, max=929, avg=530.90, stdev=167.94 00:12:16.673 lat (usec): min=212, max=964, avg=564.47, stdev=169.39 00:12:16.673 clat percentiles (usec): 00:12:16.673 | 1.00th=[ 235], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 347], 00:12:16.673 | 30.00th=[ 416], 40.00th=[ 465], 50.00th=[ 537], 60.00th=[ 586], 00:12:16.673 | 70.00th=[ 635], 80.00th=[ 693], 90.00th=[ 758], 95.00th=[ 807], 00:12:16.673 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 930], 99.95th=[ 930], 00:12:16.673 | 99.99th=[ 930] 00:12:16.673 bw ( KiB/s): min= 4096, max= 4096, per=37.84%, avg=4096.00, stdev= 0.00, samples=1 00:12:16.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:16.673 lat (usec) : 250=1.13%, 500=41.05%, 750=43.13%, 1000=11.11% 00:12:16.673 lat (msec) : 2=0.19%, 50=3.39% 00:12:16.673 cpu : usr=1.35%, sys=1.93%, ctx=533, majf=0, minf=1 00:12:16.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.673 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.673 job1: (groupid=0, jobs=1): err= 0: pid=3619057: Tue Oct 1 08:27:08 2024 00:12:16.673 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:16.673 slat (nsec): min=25388, max=59823, avg=26430.22, stdev=2492.22 00:12:16.673 clat (usec): min=668, max=1237, avg=1000.54, stdev=93.33 00:12:16.673 lat (usec): min=695, max=1263, avg=1026.97, stdev=93.21 00:12:16.673 clat percentiles (usec): 00:12:16.673 | 1.00th=[ 766], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 922], 00:12:16.673 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:12:16.673 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:12:16.673 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:12:16.673 | 99.99th=[ 1237] 00:12:16.673 write: IOPS=733, BW=2933KiB/s (3003kB/s)(2936KiB/1001msec); 0 zone resets 00:12:16.673 slat (nsec): min=9738, max=53589, avg=30554.14, stdev=8643.80 00:12:16.673 clat (usec): min=194, max=968, avg=601.33, stdev=124.11 00:12:16.673 lat (usec): min=229, max=1001, avg=631.88, stdev=127.52 00:12:16.673 clat percentiles (usec): 00:12:16.673 | 1.00th=[ 281], 5.00th=[ 388], 10.00th=[ 449], 20.00th=[ 490], 00:12:16.673 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 635], 00:12:16.673 | 70.00th=[ 660], 
80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 807], 00:12:16.673 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 971], 99.95th=[ 971], 00:12:16.673 | 99.99th=[ 971] 00:12:16.673 bw ( KiB/s): min= 4096, max= 4096, per=37.84%, avg=4096.00, stdev= 0.00, samples=1 00:12:16.673 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:16.673 lat (usec) : 250=0.16%, 500=12.92%, 750=39.49%, 1000=25.20% 00:12:16.673 lat (msec) : 2=22.23% 00:12:16.673 cpu : usr=2.30%, sys=3.30%, ctx=1247, majf=0, minf=1 00:12:16.673 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.673 issued rwts: total=512,734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.673 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.673 job2: (groupid=0, jobs=1): err= 0: pid=3619058: Tue Oct 1 08:27:08 2024 00:12:16.673 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:16.673 slat (nsec): min=6803, max=63471, avg=27359.53, stdev=5165.56 00:12:16.673 clat (usec): min=474, max=1300, avg=955.44, stdev=109.72 00:12:16.673 lat (usec): min=481, max=1327, avg=982.80, stdev=110.40 00:12:16.673 clat percentiles (usec): 00:12:16.673 | 1.00th=[ 685], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 865], 00:12:16.673 | 30.00th=[ 906], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 1004], 00:12:16.673 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:12:16.673 | 99.00th=[ 1188], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1303], 00:12:16.673 | 99.99th=[ 1303] 00:12:16.673 write: IOPS=810, BW=3241KiB/s (3319kB/s)(3244KiB/1001msec); 0 zone resets 00:12:16.673 slat (nsec): min=9112, max=54123, avg=30738.13, stdev=8516.68 00:12:16.673 clat (usec): min=201, max=955, avg=568.97, stdev=134.98 00:12:16.673 lat (usec): min=213, max=987, avg=599.71, stdev=137.03 00:12:16.673 clat percentiles (usec): 00:12:16.673 | 1.00th=[ 262], 5.00th=[ 334], 10.00th=[ 388], 20.00th=[ 441], 00:12:16.673 | 30.00th=[ 502], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 611], 00:12:16.673 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 766], 00:12:16.673 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 955], 99.95th=[ 955], 00:12:16.674 | 99.99th=[ 955] 00:12:16.674 bw ( KiB/s): min= 4096, max= 4096, per=37.84%, avg=4096.00, stdev= 0.00, samples=1 00:12:16.674 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:16.674 lat (usec) : 250=0.38%, 500=17.69%, 750=40.51%, 1000=25.77% 00:12:16.674 lat (msec) : 2=15.65% 00:12:16.674 cpu : usr=2.50%, sys=5.50%, ctx=1323, majf=0, minf=2 00:12:16.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.674 issued rwts: total=512,811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.674 job3: (groupid=0, jobs=1): err= 0: pid=3619059: Tue Oct 1 08:27:08 2024 00:12:16.674 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:16.674 slat (nsec): min=7207, max=58781, avg=26399.53, stdev=4051.60 00:12:16.674 clat (usec): min=688, max=1572, avg=1040.91, stdev=94.13 00:12:16.674 lat (usec): min=714, max=1597, avg=1067.30, stdev=94.15 00:12:16.674 clat percentiles (usec): 00:12:16.674 | 1.00th=[ 783], 5.00th=[ 
865], 10.00th=[ 914], 20.00th=[ 971], 00:12:16.674 | 30.00th=[ 1004], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:12:16.674 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:12:16.674 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1565], 99.95th=[ 1565], 00:12:16.674 | 99.99th=[ 1565] 00:12:16.674 write: IOPS=743, BW=2973KiB/s (3044kB/s)(2976KiB/1001msec); 0 zone resets 00:12:16.674 slat (nsec): min=9416, max=60871, avg=28387.27, stdev=9076.82 00:12:16.674 clat (usec): min=208, max=958, avg=568.19, stdev=125.54 00:12:16.674 lat (usec): min=218, max=990, avg=596.58, stdev=128.93 00:12:16.674 clat percentiles (usec): 00:12:16.674 | 1.00th=[ 258], 5.00th=[ 338], 10.00th=[ 388], 20.00th=[ 461], 00:12:16.674 | 30.00th=[ 502], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 611], 00:12:16.674 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 758], 00:12:16.674 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 955], 99.95th=[ 955], 00:12:16.674 | 99.99th=[ 955] 00:12:16.674 bw ( KiB/s): min= 4096, max= 4096, per=37.84%, avg=4096.00, stdev= 0.00, samples=1 00:12:16.674 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:16.674 lat (usec) : 250=0.32%, 500=17.20%, 750=38.14%, 1000=14.73% 00:12:16.674 lat (msec) : 2=29.62% 00:12:16.674 cpu : usr=1.60%, sys=3.90%, ctx=1256, majf=0, minf=1 00:12:16.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.674 issued rwts: total=512,744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.674 00:12:16.674 Run status group 0 (all jobs): 00:12:16.674 READ: bw=6010KiB/s (6154kB/s), 73.4KiB/s-2046KiB/s (75.2kB/s-2095kB/s), io=6220KiB (6369kB), run=1001-1035msec 00:12:16.674 WRITE: bw=10.6MiB/s (11.1MB/s), 1979KiB/s-3241KiB/s (2026kB/s-3319kB/s), io=10.9MiB (11.5MB), run=1001-1035msec 00:12:16.674 00:12:16.674 Disk stats (read/write): 00:12:16.674 nvme0n1: ios=41/512, merge=0/0, ticks=1528/197, in_queue=1725, util=96.69% 00:12:16.674 nvme0n2: ios=511/512, merge=0/0, ticks=1454/298, in_queue=1752, util=97.15% 00:12:16.674 nvme0n3: ios=512/534, merge=0/0, ticks=441/237, in_queue=678, util=88.49% 00:12:16.674 nvme0n4: ios=493/512, merge=0/0, ticks=501/279, in_queue=780, util=89.53% 00:12:16.674 08:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:16.674 [global] 00:12:16.674 thread=1 00:12:16.674 invalidate=1 00:12:16.674 rw=write 00:12:16.674 time_based=1 00:12:16.674 runtime=1 00:12:16.674 ioengine=libaio 00:12:16.674 direct=1 00:12:16.674 bs=4096 00:12:16.674 iodepth=128 00:12:16.674 norandommap=0 00:12:16.674 numjobs=1 00:12:16.674 00:12:16.674 verify_dump=1 00:12:16.674 verify_backlog=512 00:12:16.674 verify_state_save=0 00:12:16.674 do_verify=1 00:12:16.674 verify=crc32c-intel 00:12:16.674 [job0] 00:12:16.674 filename=/dev/nvme0n1 00:12:16.674 [job1] 00:12:16.674 filename=/dev/nvme0n2 00:12:16.674 [job2] 00:12:16.674 filename=/dev/nvme0n3 00:12:16.674 [job3] 00:12:16.674 filename=/dev/nvme0n4 00:12:16.674 Could not set queue depth (nvme0n1) 00:12:16.674 Could not set queue depth (nvme0n2) 00:12:16.674 Could not set queue depth (nvme0n3) 00:12:16.674 Could not set queue depth (nvme0n4) 00:12:16.936 job0: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.936 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.936 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.936 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.936 fio-3.35 00:12:16.936 Starting 4 threads 00:12:18.341 00:12:18.341 job0: (groupid=0, jobs=1): err= 0: pid=3619575: Tue Oct 1 08:27:09 2024 00:12:18.341 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:12:18.341 slat (nsec): min=938, max=10243k, avg=68598.85, stdev=566942.33 00:12:18.341 clat (usec): min=1394, max=28517, avg=9948.60, stdev=3653.39 00:12:18.341 lat (usec): min=1401, max=28534, avg=10017.20, stdev=3701.65 00:12:18.341 clat percentiles (usec): 00:12:18.341 | 1.00th=[ 2474], 5.00th=[ 5407], 10.00th=[ 6521], 20.00th=[ 6980], 00:12:18.341 | 30.00th=[ 7504], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9896], 00:12:18.341 | 70.00th=[10945], 80.00th=[13042], 90.00th=[14484], 95.00th=[17957], 00:12:18.341 | 99.00th=[19792], 99.50th=[20841], 99.90th=[23462], 99.95th=[23462], 00:12:18.341 | 99.99th=[28443] 00:12:18.341 write: IOPS=5407, BW=21.1MiB/s (22.1MB/s)(21.3MiB/1010msec); 0 zone resets 00:12:18.341 slat (nsec): min=1667, max=11432k, avg=103926.18, stdev=652003.38 00:12:18.341 clat (usec): min=496, max=64001, avg=14140.09, stdev=14653.32 00:12:18.342 lat (usec): min=504, max=64031, avg=14244.02, stdev=14751.82 00:12:18.342 clat percentiles (usec): 00:12:18.342 | 1.00th=[ 1811], 5.00th=[ 3785], 10.00th=[ 4424], 20.00th=[ 5866], 00:12:18.342 | 30.00th=[ 6259], 40.00th=[ 7111], 50.00th=[ 8094], 60.00th=[ 9765], 00:12:18.342 | 70.00th=[11994], 80.00th=[17957], 90.00th=[38011], 95.00th=[55313], 00:12:18.342 | 99.00th=[61604], 99.50th=[62653], 99.90th=[64226], 99.95th=[64226], 00:12:18.342 | 99.99th=[64226] 00:12:18.342 bw ( KiB/s): min=18104, max=24568, per=26.14%, avg=21336.00, stdev=4570.74, samples=2 00:12:18.342 iops : min= 4526, max= 6142, avg=5334.00, stdev=1142.68, samples=2 00:12:18.342 lat (usec) : 500=0.03% 00:12:18.342 lat (msec) : 2=1.02%, 4=2.72%, 10=57.82%, 20=29.01%, 50=5.67% 00:12:18.342 lat (msec) : 100=3.72% 00:12:18.342 cpu : usr=4.26%, sys=6.34%, ctx=317, majf=0, minf=1 00:12:18.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:18.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.342 issued rwts: total=5120,5462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.342 job1: (groupid=0, jobs=1): err= 0: pid=3619576: Tue Oct 1 08:27:09 2024 00:12:18.342 read: IOPS=4454, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1006msec) 00:12:18.342 slat (nsec): min=969, max=11086k, avg=107480.01, stdev=716549.57 00:12:18.342 clat (usec): min=4120, max=74961, avg=11947.19, stdev=9827.42 00:12:18.342 lat (usec): min=4125, max=74965, avg=12054.67, stdev=9936.02 00:12:18.342 clat percentiles (usec): 00:12:18.342 | 1.00th=[ 4490], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6980], 00:12:18.342 | 30.00th=[ 7308], 40.00th=[ 8225], 50.00th=[ 9372], 60.00th=[ 9896], 00:12:18.342 | 70.00th=[11994], 80.00th=[13304], 90.00th=[16450], 95.00th=[29492], 00:12:18.342 | 99.00th=[58983], 99.50th=[67634], 99.90th=[74974], 
99.95th=[74974], 00:12:18.342 | 99.99th=[74974] 00:12:18.342 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:12:18.342 slat (nsec): min=1646, max=15120k, avg=106622.83, stdev=669882.59 00:12:18.342 clat (usec): min=1121, max=74963, avg=16071.94, stdev=17452.20 00:12:18.342 lat (usec): min=1131, max=74972, avg=16178.56, stdev=17554.84 00:12:18.342 clat percentiles (usec): 00:12:18.342 | 1.00th=[ 3490], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 5604], 00:12:18.342 | 30.00th=[ 6128], 40.00th=[ 8291], 50.00th=[ 9765], 60.00th=[11469], 00:12:18.342 | 70.00th=[14222], 80.00th=[16581], 90.00th=[46924], 95.00th=[64750], 00:12:18.342 | 99.00th=[70779], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:12:18.342 | 99.99th=[74974] 00:12:18.342 bw ( KiB/s): min=17936, max=18928, per=22.58%, avg=18432.00, stdev=701.45, samples=2 00:12:18.342 iops : min= 4484, max= 4732, avg=4608.00, stdev=175.36, samples=2 00:12:18.342 lat (msec) : 2=0.02%, 4=0.76%, 10=56.29%, 20=31.41%, 50=5.24% 00:12:18.342 lat (msec) : 100=6.28% 00:12:18.342 cpu : usr=5.17%, sys=3.88%, ctx=311, majf=0, minf=1 00:12:18.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:18.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.342 issued rwts: total=4481,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.342 job2: (groupid=0, jobs=1): err= 0: pid=3619577: Tue Oct 1 08:27:09 2024 00:12:18.342 read: IOPS=6006, BW=23.5MiB/s (24.6MB/s)(23.6MiB/1006msec) 00:12:18.342 slat (nsec): min=1021, max=9948.1k, avg=87048.55, stdev=665852.37 00:12:18.342 clat (usec): min=2398, max=23136, avg=10995.34, stdev=2857.86 00:12:18.342 lat (usec): min=4183, max=23143, avg=11082.39, stdev=2903.51 00:12:18.342 clat percentiles (usec): 00:12:18.342 | 1.00th=[ 4490], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[ 9110], 00:12:18.342 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[10945], 00:12:18.342 | 70.00th=[11338], 80.00th=[12518], 90.00th=[15533], 95.00th=[16581], 00:12:18.342 | 99.00th=[20579], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:12:18.342 | 99.99th=[23200] 00:12:18.342 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:12:18.342 slat (nsec): min=1735, max=9999.0k, avg=72106.90, stdev=432208.21 00:12:18.342 clat (usec): min=1184, max=32109, avg=9903.18, stdev=3889.57 00:12:18.342 lat (usec): min=1195, max=32115, avg=9975.28, stdev=3915.59 00:12:18.342 clat percentiles (usec): 00:12:18.342 | 1.00th=[ 3326], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 7832], 00:12:18.342 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:12:18.342 | 70.00th=[10814], 80.00th=[11207], 90.00th=[12256], 95.00th=[13173], 00:12:18.342 | 99.00th=[30016], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:12:18.342 | 99.99th=[32113] 00:12:18.342 bw ( KiB/s): min=22000, max=27152, per=30.10%, avg=24576.00, stdev=3643.01, samples=2 00:12:18.342 iops : min= 5500, max= 6788, avg=6144.00, stdev=910.75, samples=2 00:12:18.342 lat (msec) : 2=0.02%, 4=1.24%, 10=52.88%, 20=43.60%, 50=2.26% 00:12:18.342 cpu : usr=3.58%, sys=6.97%, ctx=628, majf=0, minf=1 00:12:18.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:18.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.342 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.342 issued rwts: total=6043,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.342 job3: (groupid=0, jobs=1): err= 0: pid=3619578: Tue Oct 1 08:27:09 2024 00:12:18.342 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:12:18.342 slat (nsec): min=939, max=32537k, avg=113341.26, stdev=865813.68 00:12:18.342 clat (usec): min=2394, max=48649, avg=14529.64, stdev=9983.71 00:12:18.342 lat (usec): min=2398, max=52009, avg=14642.98, stdev=10080.00 00:12:18.342 clat percentiles (usec): 00:12:18.342 | 1.00th=[ 3589], 5.00th=[ 4555], 10.00th=[ 6456], 20.00th=[ 7570], 00:12:18.342 | 30.00th=[ 7963], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10814], 00:12:18.342 | 70.00th=[14484], 80.00th=[25035], 90.00th=[33162], 95.00th=[34866], 00:12:18.342 | 99.00th=[38536], 99.50th=[41681], 99.90th=[45876], 99.95th=[48497], 00:12:18.342 | 99.99th=[48497] 00:12:18.342 write: IOPS=4364, BW=17.0MiB/s (17.9MB/s)(17.2MiB/1008msec); 0 zone resets 00:12:18.342 slat (nsec): min=1583, max=16126k, avg=109741.20, stdev=618536.12 00:12:18.342 clat (usec): min=540, max=74694, avg=15536.31, stdev=17220.39 00:12:18.342 lat (usec): min=549, max=74700, avg=15646.06, stdev=17331.50 00:12:18.342 clat percentiles (usec): 00:12:18.342 | 1.00th=[ 1532], 5.00th=[ 4113], 10.00th=[ 4686], 20.00th=[ 5342], 00:12:18.342 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 7963], 00:12:18.342 | 70.00th=[10028], 80.00th=[24773], 90.00th=[48497], 95.00th=[56886], 00:12:18.342 | 99.00th=[63701], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:12:18.342 | 99.99th=[74974] 00:12:18.342 bw ( KiB/s): min= 6352, max=27816, per=20.93%, avg=17084.00, stdev=15177.34, samples=2 00:12:18.342 iops : min= 1588, max= 6954, avg=4271.00, stdev=3794.33, samples=2 00:12:18.342 lat (usec) : 750=0.04%, 1000=0.01% 00:12:18.342 lat (msec) : 2=0.71%, 4=2.24%, 10=58.10%, 20=15.13%, 50=19.07% 00:12:18.342 lat (msec) : 100=4.71% 00:12:18.342 cpu : usr=2.58%, sys=5.46%, ctx=415, majf=0, minf=1 00:12:18.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:18.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.342 issued rwts: total=4096,4399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.342 00:12:18.342 Run status group 0 (all jobs): 00:12:18.342 READ: bw=76.3MiB/s (80.1MB/s), 15.9MiB/s-23.5MiB/s (16.6MB/s-24.6MB/s), io=77.1MiB (80.9MB), run=1006-1010msec 00:12:18.342 WRITE: bw=79.7MiB/s (83.6MB/s), 17.0MiB/s-23.9MiB/s (17.9MB/s-25.0MB/s), io=80.5MiB (84.4MB), run=1006-1010msec 00:12:18.342 00:12:18.342 Disk stats (read/write): 00:12:18.342 nvme0n1: ios=4285/4608, merge=0/0, ticks=41551/60320, in_queue=101871, util=86.77% 00:12:18.342 nvme0n2: ios=3625/3911, merge=0/0, ticks=42059/56770, in_queue=98829, util=95.92% 00:12:18.342 nvme0n3: ios=4792/5120, merge=0/0, ticks=51667/50312, in_queue=101979, util=100.00% 00:12:18.342 nvme0n4: ios=3741/4096, merge=0/0, ticks=38394/37770, in_queue=76164, util=95.41% 00:12:18.342 08:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:18.342 [global] 00:12:18.342 thread=1 00:12:18.342 invalidate=1 00:12:18.342 rw=randwrite 
00:12:18.342 time_based=1 00:12:18.342 runtime=1 00:12:18.342 ioengine=libaio 00:12:18.342 direct=1 00:12:18.342 bs=4096 00:12:18.342 iodepth=128 00:12:18.342 norandommap=0 00:12:18.342 numjobs=1 00:12:18.342 00:12:18.342 verify_dump=1 00:12:18.342 verify_backlog=512 00:12:18.342 verify_state_save=0 00:12:18.342 do_verify=1 00:12:18.343 verify=crc32c-intel 00:12:18.343 [job0] 00:12:18.343 filename=/dev/nvme0n1 00:12:18.343 [job1] 00:12:18.343 filename=/dev/nvme0n2 00:12:18.343 [job2] 00:12:18.343 filename=/dev/nvme0n3 00:12:18.343 [job3] 00:12:18.343 filename=/dev/nvme0n4 00:12:18.343 Could not set queue depth (nvme0n1) 00:12:18.343 Could not set queue depth (nvme0n2) 00:12:18.343 Could not set queue depth (nvme0n3) 00:12:18.343 Could not set queue depth (nvme0n4) 00:12:18.607 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.607 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.607 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.607 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.607 fio-3.35 00:12:18.607 Starting 4 threads 00:12:20.014 00:12:20.014 job0: (groupid=0, jobs=1): err= 0: pid=3620375: Tue Oct 1 08:27:11 2024 00:12:20.014 read: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1011msec) 00:12:20.014 slat (nsec): min=961, max=12220k, avg=61129.89, stdev=484905.92 00:12:20.014 clat (usec): min=2420, max=51156, avg=8830.33, stdev=6435.30 00:12:20.014 lat (usec): min=2426, max=52740, avg=8891.46, stdev=6491.72 00:12:20.014 clat percentiles (usec): 00:12:20.014 | 1.00th=[ 4178], 5.00th=[ 4555], 10.00th=[ 5342], 20.00th=[ 5800], 00:12:20.014 | 30.00th=[ 6128], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7439], 00:12:20.014 | 70.00th=[ 8356], 80.00th=[ 9634], 90.00th=[12256], 95.00th=[21627], 00:12:20.014 | 99.00th=[41681], 99.50th=[42206], 99.90th=[51119], 99.95th=[51119], 00:12:20.014 | 99.99th=[51119] 00:12:20.014 write: IOPS=6406, BW=25.0MiB/s (26.2MB/s)(25.3MiB/1011msec); 0 zone resets 00:12:20.014 slat (nsec): min=1581, max=12800k, avg=87142.68, stdev=547587.71 00:12:20.014 clat (usec): min=1124, max=90025, avg=11432.48, stdev=16148.12 00:12:20.014 lat (usec): min=1134, max=90069, avg=11519.62, stdev=16262.65 00:12:20.014 clat percentiles (usec): 00:12:20.014 | 1.00th=[ 2180], 5.00th=[ 3490], 10.00th=[ 3916], 20.00th=[ 5211], 00:12:20.014 | 30.00th=[ 5604], 40.00th=[ 5866], 50.00th=[ 6128], 60.00th=[ 6521], 00:12:20.014 | 70.00th=[ 6980], 80.00th=[ 8848], 90.00th=[31851], 95.00th=[54789], 00:12:20.014 | 99.00th=[79168], 99.50th=[83362], 99.90th=[86508], 99.95th=[88605], 00:12:20.014 | 99.99th=[89654] 00:12:20.014 bw ( KiB/s): min=13928, max=36864, per=28.39%, avg=25396.00, stdev=16218.20, samples=2 00:12:20.014 iops : min= 3482, max= 9216, avg=6349.00, stdev=4054.55, samples=2 00:12:20.014 lat (msec) : 2=0.36%, 4=6.51%, 10=77.14%, 20=7.56%, 50=5.43% 00:12:20.014 lat (msec) : 100=3.00% 00:12:20.014 cpu : usr=3.56%, sys=6.63%, ctx=572, majf=0, minf=1 00:12:20.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:20.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:20.014 issued rwts: total=6144,6477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.014 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:12:20.014 job1: (groupid=0, jobs=1): err= 0: pid=3620391: Tue Oct 1 08:27:11 2024 00:12:20.014 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:12:20.014 slat (nsec): min=953, max=17934k, avg=266860.79, stdev=1496489.08 00:12:20.014 clat (usec): min=10952, max=75705, avg=32124.44, stdev=18147.04 00:12:20.014 lat (usec): min=10954, max=76024, avg=32391.31, stdev=18289.28 00:12:20.014 clat percentiles (usec): 00:12:20.014 | 1.00th=[11600], 5.00th=[12518], 10.00th=[13304], 20.00th=[14091], 00:12:20.014 | 30.00th=[15008], 40.00th=[22938], 50.00th=[25822], 60.00th=[32900], 00:12:20.014 | 70.00th=[43254], 80.00th=[51643], 90.00th=[61080], 95.00th=[63701], 00:12:20.014 | 99.00th=[67634], 99.50th=[68682], 99.90th=[72877], 99.95th=[76022], 00:12:20.014 | 99.99th=[76022] 00:12:20.014 write: IOPS=1996, BW=7984KiB/s (8176kB/s)(8040KiB/1007msec); 0 zone resets 00:12:20.014 slat (nsec): min=1654, max=23440k, avg=288404.04, stdev=1401113.44 00:12:20.014 clat (usec): min=5233, max=77991, avg=37760.42, stdev=20136.16 00:12:20.014 lat (usec): min=8696, max=77999, avg=38048.82, stdev=20278.32 00:12:20.014 clat percentiles (usec): 00:12:20.014 | 1.00th=[10814], 5.00th=[12125], 10.00th=[12387], 20.00th=[15533], 00:12:20.014 | 30.00th=[19792], 40.00th=[30540], 50.00th=[34866], 60.00th=[43254], 00:12:20.014 | 70.00th=[52167], 80.00th=[60031], 90.00th=[66323], 95.00th=[71828], 00:12:20.014 | 99.00th=[76022], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:12:20.014 | 99.99th=[78119] 00:12:20.014 bw ( KiB/s): min= 6864, max= 8192, per=8.42%, avg=7528.00, stdev=939.04, samples=2 00:12:20.014 iops : min= 1716, max= 2048, avg=1882.00, stdev=234.76, samples=2 00:12:20.014 lat (msec) : 10=0.34%, 20=32.06%, 50=40.52%, 100=27.07% 00:12:20.014 cpu : usr=1.79%, sys=1.89%, ctx=258, majf=0, minf=1 00:12:20.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:20.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:20.014 issued rwts: total=1536,2010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:20.014 job2: (groupid=0, jobs=1): err= 0: pid=3620407: Tue Oct 1 08:27:11 2024 00:12:20.014 read: IOPS=6682, BW=26.1MiB/s (27.4MB/s)(26.3MiB/1008msec) 00:12:20.014 slat (nsec): min=918, max=9330.4k, avg=54120.80, stdev=451084.11 00:12:20.014 clat (usec): min=1636, max=24161, avg=7950.11, stdev=2748.91 00:12:20.014 lat (usec): min=1642, max=24189, avg=8004.23, stdev=2786.80 00:12:20.014 clat percentiles (usec): 00:12:20.014 | 1.00th=[ 3097], 5.00th=[ 4293], 10.00th=[ 5342], 20.00th=[ 5932], 00:12:20.014 | 30.00th=[ 6587], 40.00th=[ 6915], 50.00th=[ 7373], 60.00th=[ 7963], 00:12:20.014 | 70.00th=[ 8455], 80.00th=[ 9765], 90.00th=[11731], 95.00th=[14746], 00:12:20.014 | 99.00th=[15401], 99.50th=[16450], 99.90th=[19792], 99.95th=[20055], 00:12:20.014 | 99.99th=[24249] 00:12:20.014 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets 00:12:20.014 slat (nsec): min=1599, max=14601k, avg=57888.36, stdev=467321.94 00:12:20.014 clat (usec): min=524, max=83037, avg=9664.57, stdev=12680.50 00:12:20.014 lat (usec): min=549, max=83044, avg=9722.46, stdev=12751.29 00:12:20.014 clat percentiles (usec): 00:12:20.014 | 1.00th=[ 1254], 5.00th=[ 2769], 10.00th=[ 3916], 20.00th=[ 5014], 00:12:20.014 | 30.00th=[ 5604], 40.00th=[ 6194], 50.00th=[ 6456], 60.00th=[ 
6718], 00:12:20.014 | 70.00th=[ 7242], 80.00th=[ 8586], 90.00th=[12780], 95.00th=[40109], 00:12:20.014 | 99.00th=[77071], 99.50th=[80217], 99.90th=[81265], 99.95th=[83362], 00:12:20.014 | 99.99th=[83362] 00:12:20.014 bw ( KiB/s): min=28664, max=32392, per=34.13%, avg=30528.00, stdev=2636.09, samples=2 00:12:20.014 iops : min= 7166, max= 8098, avg=7632.00, stdev=659.02, samples=2 00:12:20.014 lat (usec) : 750=0.08%, 1000=0.20% 00:12:20.014 lat (msec) : 2=1.22%, 4=6.05%, 10=76.36%, 20=12.39%, 50=1.82% 00:12:20.014 lat (msec) : 100=1.88% 00:12:20.014 cpu : usr=5.36%, sys=8.14%, ctx=510, majf=0, minf=1 00:12:20.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:20.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:20.014 issued rwts: total=6736,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:20.014 job3: (groupid=0, jobs=1): err= 0: pid=3620413: Tue Oct 1 08:27:11 2024 00:12:20.014 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:12:20.014 slat (nsec): min=959, max=14928k, avg=82586.69, stdev=597439.53 00:12:20.014 clat (usec): min=3865, max=36286, avg=10528.15, stdev=3861.82 00:12:20.014 lat (usec): min=3868, max=36294, avg=10610.74, stdev=3910.04 00:12:20.014 clat percentiles (usec): 00:12:20.014 | 1.00th=[ 5669], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[ 7963], 00:12:20.014 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10028], 00:12:20.014 | 70.00th=[10945], 80.00th=[12911], 90.00th=[14877], 95.00th=[17171], 00:12:20.014 | 99.00th=[27919], 99.50th=[29230], 99.90th=[35914], 99.95th=[36439], 00:12:20.014 | 99.99th=[36439] 00:12:20.014 write: IOPS=6409, BW=25.0MiB/s (26.3MB/s)(25.2MiB/1005msec); 0 zone resets 00:12:20.015 slat (nsec): min=1599, max=8757.0k, avg=70745.57, stdev=461851.32 00:12:20.015 clat (usec): min=1090, max=36294, avg=9757.17, stdev=5782.88 00:12:20.015 lat (usec): min=1101, max=36319, avg=9827.92, stdev=5820.88 00:12:20.015 clat percentiles (usec): 00:12:20.015 | 1.00th=[ 3785], 5.00th=[ 4621], 10.00th=[ 4883], 20.00th=[ 5932], 00:12:20.015 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 8979], 00:12:20.015 | 70.00th=[10945], 80.00th=[12387], 90.00th=[16319], 95.00th=[20579], 00:12:20.015 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:12:20.015 | 99.99th=[36439] 00:12:20.015 bw ( KiB/s): min=20480, max=30032, per=28.23%, avg=25256.00, stdev=6754.28, samples=2 00:12:20.015 iops : min= 5120, max= 7508, avg=6314.00, stdev=1688.57, samples=2 00:12:20.015 lat (msec) : 2=0.06%, 4=0.78%, 10=61.62%, 20=33.60%, 50=3.94% 00:12:20.015 cpu : usr=4.98%, sys=7.27%, ctx=376, majf=0, minf=2 00:12:20.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:20.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:20.015 issued rwts: total=6144,6442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:20.015 00:12:20.015 Run status group 0 (all jobs): 00:12:20.015 READ: bw=79.4MiB/s (83.3MB/s), 6101KiB/s-26.1MiB/s (6248kB/s-27.4MB/s), io=80.3MiB (84.2MB), run=1005-1011msec 00:12:20.015 WRITE: bw=87.4MiB/s (91.6MB/s), 7984KiB/s-29.8MiB/s (8176kB/s-31.2MB/s), io=88.3MiB (92.6MB), run=1005-1011msec 00:12:20.015 
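Editor's note: the [global]/[jobN] dump earlier in this run's output fully determines the workload, so the randwrite pass can be reproduced without the fio-wrapper harness. A minimal sketch, reconstructed verbatim from the logged options; the /dev/nvme0nX paths assume the same four connected namespaces, and the job-file name is illustrative:

# randwrite.fio -- reconstructed from the logged [global]/[jobN] sections (sketch)
cat > /tmp/randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
# roughly what "fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v" generates and runs
fio /tmp/randwrite.fio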
00:12:20.015 Disk stats (read/write): 00:12:20.015 nvme0n1: ios=5170/5589, merge=0/0, ticks=36951/42054, in_queue=79005, util=86.67% 00:12:20.015 nvme0n2: ios=1219/1536, merge=0/0, ticks=13301/21439, in_queue=34740, util=98.57% 00:12:20.015 nvme0n3: ios=5682/6263, merge=0/0, ticks=42818/55730, in_queue=98548, util=100.00% 00:12:20.015 nvme0n4: ios=5120/5289, merge=0/0, ticks=51349/49474, in_queue=100823, util=89.41% 00:12:20.015 08:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:20.015 08:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3620860 00:12:20.015 08:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:20.015 08:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:20.015 [global] 00:12:20.015 thread=1 00:12:20.015 invalidate=1 00:12:20.015 rw=read 00:12:20.015 time_based=1 00:12:20.015 runtime=10 00:12:20.015 ioengine=libaio 00:12:20.015 direct=1 00:12:20.015 bs=4096 00:12:20.015 iodepth=1 00:12:20.015 norandommap=1 00:12:20.015 numjobs=1 00:12:20.015 00:12:20.015 [job0] 00:12:20.015 filename=/dev/nvme0n1 00:12:20.015 [job1] 00:12:20.015 filename=/dev/nvme0n2 00:12:20.015 [job2] 00:12:20.015 filename=/dev/nvme0n3 00:12:20.015 [job3] 00:12:20.015 filename=/dev/nvme0n4 00:12:20.015 Could not set queue depth (nvme0n1) 00:12:20.015 Could not set queue depth (nvme0n2) 00:12:20.015 Could not set queue depth (nvme0n3) 00:12:20.015 Could not set queue depth (nvme0n4) 00:12:20.278 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.278 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.278 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.278 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.278 fio-3.35 00:12:20.278 Starting 4 threads 00:12:22.824 08:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:23.085 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=724992, buflen=4096 00:12:23.085 fio: pid=3621096, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:23.085 08:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:23.085 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10215424, buflen=4096 00:12:23.085 fio: pid=3621094, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:23.085 08:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.085 08:27:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:23.345 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11399168, buflen=4096 00:12:23.345 fio: pid=3621089, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:23.345 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.345 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:23.607 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12128256, buflen=4096 00:12:23.607 fio: pid=3621090, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:23.607 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.607 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:23.607 00:12:23.608 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3621089: Tue Oct 1 08:27:15 2024 00:12:23.608 read: IOPS=944, BW=3776KiB/s (3867kB/s)(10.9MiB/2948msec) 00:12:23.608 slat (usec): min=7, max=35134, avg=42.67, stdev=691.49 00:12:23.608 clat (usec): min=320, max=1349, avg=1001.98, stdev=89.92 00:12:23.608 lat (usec): min=346, max=36225, avg=1044.65, stdev=699.74 00:12:23.608 clat percentiles (usec): 00:12:23.608 | 1.00th=[ 734], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 938], 00:12:23.608 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1037], 00:12:23.608 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:12:23.608 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1254], 99.95th=[ 1254], 00:12:23.608 | 99.99th=[ 1352] 00:12:23.608 bw ( KiB/s): min= 3808, max= 3912, per=36.06%, avg=3872.00, stdev=41.18, samples=5 00:12:23.608 iops : min= 952, max= 978, avg=968.00, stdev=10.30, samples=5 00:12:23.608 lat (usec) : 500=0.07%, 750=1.08%, 1000=38.29% 00:12:23.608 lat (msec) : 2=60.52% 00:12:23.608 cpu : usr=0.92%, sys=3.02%, ctx=2788, majf=0, minf=1 00:12:23.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.608 issued rwts: total=2784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.608 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3621090: Tue Oct 1 08:27:15 2024 00:12:23.608 read: IOPS=944, BW=3778KiB/s (3869kB/s)(11.6MiB/3135msec) 00:12:23.608 slat (usec): min=6, max=14332, avg=38.51, stdev=380.20 00:12:23.608 clat (usec): min=472, max=1221, avg=1006.15, stdev=83.28 00:12:23.608 lat (usec): min=479, max=15452, avg=1044.66, stdev=391.18 00:12:23.608 clat percentiles (usec): 00:12:23.608 | 1.00th=[ 758], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 947], 00:12:23.608 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1037], 00:12:23.608 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:12:23.608 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1205], 99.95th=[ 1205], 00:12:23.608 | 99.99th=[ 1221] 00:12:23.608 bw ( KiB/s): min= 3780, max= 3936, per=35.71%, avg=3834.00, stdev=65.96, samples=6 00:12:23.608 iops : min= 945, max= 984, avg=958.50, stdev=16.49, samples=6 00:12:23.608 lat (usec) : 500=0.03%, 750=0.88%, 1000=37.85% 00:12:23.608 lat (msec) : 2=61.21% 00:12:23.608 cpu : usr=0.67%, sys=3.16%, ctx=2967, 
majf=0, minf=2 00:12:23.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.608 issued rwts: total=2962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.608 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3621094: Tue Oct 1 08:27:15 2024 00:12:23.608 read: IOPS=904, BW=3616KiB/s (3703kB/s)(9976KiB/2759msec) 00:12:23.608 slat (usec): min=6, max=21809, avg=41.76, stdev=516.32 00:12:23.608 clat (usec): min=496, max=1393, avg=1049.01, stdev=115.89 00:12:23.608 lat (usec): min=524, max=22960, avg=1090.77, stdev=531.39 00:12:23.608 clat percentiles (usec): 00:12:23.608 | 1.00th=[ 701], 5.00th=[ 832], 10.00th=[ 914], 20.00th=[ 971], 00:12:23.608 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090], 00:12:23.608 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1221], 00:12:23.608 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1369], 99.95th=[ 1385], 00:12:23.608 | 99.99th=[ 1401] 00:12:23.608 bw ( KiB/s): min= 3632, max= 3768, per=34.31%, avg=3684.80, stdev=52.95, samples=5 00:12:23.608 iops : min= 908, max= 942, avg=921.20, stdev=13.24, samples=5 00:12:23.608 lat (usec) : 500=0.04%, 750=1.92%, 1000=27.58% 00:12:23.608 lat (msec) : 2=70.42% 00:12:23.608 cpu : usr=1.49%, sys=3.84%, ctx=2497, majf=0, minf=2 00:12:23.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.608 issued rwts: total=2495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.608 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3621096: Tue Oct 1 08:27:15 2024 00:12:23.608 read: IOPS=69, BW=275KiB/s (281kB/s)(708KiB/2578msec) 00:12:23.608 slat (nsec): min=7261, max=46210, avg=26702.10, stdev=5663.07 00:12:23.608 clat (usec): min=555, max=42164, avg=14378.34, stdev=19272.14 00:12:23.608 lat (usec): min=563, max=42192, avg=14405.04, stdev=19273.49 00:12:23.608 clat percentiles (usec): 00:12:23.608 | 1.00th=[ 603], 5.00th=[ 766], 10.00th=[ 848], 20.00th=[ 922], 00:12:23.608 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1090], 00:12:23.608 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:12:23.608 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:23.608 | 99.99th=[42206] 00:12:23.608 bw ( KiB/s): min= 96, max= 656, per=1.94%, avg=208.00, stdev=250.44, samples=5 00:12:23.608 iops : min= 24, max= 164, avg=52.00, stdev=62.61, samples=5 00:12:23.608 lat (usec) : 750=3.93%, 1000=33.15% 00:12:23.608 lat (msec) : 2=29.78%, 50=32.58% 00:12:23.608 cpu : usr=0.04%, sys=0.27%, ctx=179, majf=0, minf=2 00:12:23.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.608 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.608 issued rwts: total=178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.608 latency : target=0, window=0, percentile=100.00%, depth=1 
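Editor's note: the err=95 (Operation not supported) completions above are the point of this pass. While the 10-second read jobs run, fio.sh deletes the bdevs backing the exported namespaces, the in-flight reads fail, and the harness later asserts fio_status=4 ("fio failed as expected"). A sketch of the deletion sequence as it appears in the trace; RPC_PY is shorthand for the full scripts/rpc.py path used in the log:

RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC_PY bdev_raid_delete concat0          # a namespace's backing bdev disappears mid-I/O
$RPC_PY bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC_PY bdev_malloc_delete "$malloc_bdev"   # remaining namespaces go the same way
done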
00:12:23.608 00:12:23.608 Run status group 0 (all jobs): 00:12:23.608 READ: bw=10.5MiB/s (11.0MB/s), 275KiB/s-3778KiB/s (281kB/s-3869kB/s), io=32.9MiB (34.5MB), run=2578-3135msec 00:12:23.608 00:12:23.608 Disk stats (read/write): 00:12:23.608 nvme0n1: ios=2727/0, merge=0/0, ticks=3708/0, in_queue=3708, util=98.00% 00:12:23.608 nvme0n2: ios=2955/0, merge=0/0, ticks=2900/0, in_queue=2900, util=95.04% 00:12:23.608 nvme0n3: ios=2388/0, merge=0/0, ticks=2278/0, in_queue=2278, util=96.03% 00:12:23.608 nvme0n4: ios=100/0, merge=0/0, ticks=3132/0, in_queue=3132, util=99.26% 00:12:23.869 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.869 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:23.869 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.869 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:24.129 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:24.129 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:24.390 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:24.390 08:27:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:24.390 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:24.390 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3620860 00:12:24.390 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:24.390 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug 
test: fio failed as expected' 00:12:24.652 nvmf hotplug test: fio failed as expected 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.652 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.652 rmmod nvme_tcp 00:12:24.913 rmmod nvme_fabrics 00:12:24.913 rmmod nvme_keyring 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3616536 ']' 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3616536 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3616536 ']' 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3616536 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3616536 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3616536' 00:12:24.913 killing process with pid 3616536 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3616536 00:12:24.913 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3616536 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:25.175 08:27:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.175 08:27:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.092 08:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.092 00:12:27.092 real 0m29.019s 00:12:27.092 user 2m38.551s 00:12:27.092 sys 0m9.522s 00:12:27.092 08:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.092 08:27:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.092 ************************************ 00:12:27.092 END TEST nvmf_fio_target 00:12:27.092 ************************************ 00:12:27.092 08:27:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:27.092 08:27:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:27.092 08:27:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.092 08:27:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:27.092 ************************************ 00:12:27.092 START TEST nvmf_bdevio 00:12:27.092 ************************************ 00:12:27.092 08:27:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:27.366 * Looking for test storage... 
00:12:27.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.366 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:27.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.367 --rc genhtml_branch_coverage=1 00:12:27.367 --rc genhtml_function_coverage=1 00:12:27.367 --rc genhtml_legend=1 00:12:27.367 --rc geninfo_all_blocks=1 00:12:27.367 --rc geninfo_unexecuted_blocks=1 00:12:27.367 00:12:27.367 ' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:27.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.367 --rc genhtml_branch_coverage=1 00:12:27.367 --rc genhtml_function_coverage=1 00:12:27.367 --rc genhtml_legend=1 00:12:27.367 --rc geninfo_all_blocks=1 00:12:27.367 --rc geninfo_unexecuted_blocks=1 00:12:27.367 00:12:27.367 ' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:27.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.367 --rc genhtml_branch_coverage=1 00:12:27.367 --rc genhtml_function_coverage=1 00:12:27.367 --rc genhtml_legend=1 00:12:27.367 --rc geninfo_all_blocks=1 00:12:27.367 --rc geninfo_unexecuted_blocks=1 00:12:27.367 00:12:27.367 ' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:27.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.367 --rc genhtml_branch_coverage=1 00:12:27.367 --rc genhtml_function_coverage=1 00:12:27.367 --rc genhtml_legend=1 00:12:27.367 --rc geninfo_all_blocks=1 00:12:27.367 --rc geninfo_unexecuted_blocks=1 00:12:27.367 00:12:27.367 ' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.367 08:27:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:35.518 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:35.518 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:35.518 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:35.518 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.518 08:27:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:12:35.518 00:12:35.518 --- 10.0.0.2 ping statistics --- 00:12:35.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.518 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:12:35.518 00:12:35.518 --- 10.0.0.1 ping statistics --- 00:12:35.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.518 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=3626168 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3626168 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3626168 ']' 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.518 08:27:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.518 [2024-10-01 08:27:26.617227] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
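Editor's note: condensed from the nvmf_tcp_init trace above, the test topology puts the target port (cvl_0_0) into a private network namespace listening on 10.0.0.2, while the initiator port (cvl_0_1) stays in the root namespace on 10.0.0.1. A sketch of the same setup using the commands from the trace; the e810 interface names and the nvmf_tgt path are taken from this log:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
# the target app itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78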
00:12:35.518 [2024-10-01 08:27:26.617296] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.518 [2024-10-01 08:27:26.704375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.518 [2024-10-01 08:27:26.795862] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.518 [2024-10-01 08:27:26.795920] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.518 [2024-10-01 08:27:26.795929] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.518 [2024-10-01 08:27:26.795936] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.518 [2024-10-01 08:27:26.795942] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.518 [2024-10-01 08:27:26.798394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:12:35.518 [2024-10-01 08:27:26.798553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:12:35.518 [2024-10-01 08:27:26.798712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:12:35.518 [2024-10-01 08:27:26.798713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.778 [2024-10-01 08:27:27.501090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.778 Malloc0 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.778 08:27:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.778 [2024-10-01 08:27:27.550427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:35.778 { 00:12:35.778 "params": { 00:12:35.778 "name": "Nvme$subsystem", 00:12:35.778 "trtype": "$TEST_TRANSPORT", 00:12:35.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:35.778 "adrfam": "ipv4", 00:12:35.778 "trsvcid": "$NVMF_PORT", 00:12:35.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:35.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:35.778 "hdgst": ${hdgst:-false}, 00:12:35.778 "ddgst": ${ddgst:-false} 00:12:35.778 }, 00:12:35.778 "method": "bdev_nvme_attach_controller" 00:12:35.778 } 00:12:35.778 EOF 00:12:35.778 )") 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:12:35.778 08:27:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:35.778 "params": { 00:12:35.778 "name": "Nvme1", 00:12:35.778 "trtype": "tcp", 00:12:35.778 "traddr": "10.0.0.2", 00:12:35.778 "adrfam": "ipv4", 00:12:35.778 "trsvcid": "4420", 00:12:35.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:35.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:35.778 "hdgst": false, 00:12:35.778 "ddgst": false 00:12:35.778 }, 00:12:35.778 "method": "bdev_nvme_attach_controller" 00:12:35.778 }' 00:12:36.038 [2024-10-01 08:27:27.607829] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:12:36.038 [2024-10-01 08:27:27.607903] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3626485 ] 00:12:36.038 [2024-10-01 08:27:27.676148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.038 [2024-10-01 08:27:27.751992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.038 [2024-10-01 08:27:27.752109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.038 [2024-10-01 08:27:27.752113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.299 I/O targets: 00:12:36.299 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:36.299 00:12:36.299 00:12:36.299 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.299 http://cunit.sourceforge.net/ 00:12:36.299 00:12:36.299 00:12:36.299 Suite: bdevio tests on: Nvme1n1 00:12:36.299 Test: blockdev write read block ...passed 00:12:36.299 Test: blockdev write zeroes read block ...passed 00:12:36.299 Test: blockdev write zeroes read no split ...passed 00:12:36.299 Test: blockdev write zeroes read split ...passed 00:12:36.299 Test: blockdev write zeroes read split partial ...passed 00:12:36.299 Test: blockdev reset ...[2024-10-01 08:27:28.104319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:36.299 [2024-10-01 08:27:28.104389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf51270 (9): Bad file descriptor 00:12:36.559 [2024-10-01 08:27:28.172402] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:36.559 passed 00:12:36.559 Test: blockdev write read 8 blocks ...passed 00:12:36.559 Test: blockdev write read size > 128k ...passed 00:12:36.559 Test: blockdev write read invalid size ...passed 00:12:36.559 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.559 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.559 Test: blockdev write read max offset ...passed 00:12:36.559 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.820 Test: blockdev writev readv 8 blocks ...passed 00:12:36.820 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.820 Test: blockdev writev readv block ...passed 00:12:36.820 Test: blockdev writev readv size > 128k ...passed 00:12:36.820 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.820 Test: blockdev comparev and writev ...[2024-10-01 08:27:28.430985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.820 [2024-10-01 08:27:28.431014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.431025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.820 [2024-10-01 08:27:28.431032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.431385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.820 [2024-10-01 08:27:28.431393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.431404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.820 [2024-10-01 08:27:28.431409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.431776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.820 [2024-10-01 08:27:28.431785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.431795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.820 [2024-10-01 08:27:28.431802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.432156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.820 [2024-10-01 08:27:28.432165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.432174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.820 [2024-10-01 08:27:28.432180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:36.820 passed 00:12:36.820 Test: blockdev nvme passthru rw ...passed 00:12:36.820 Test: blockdev nvme passthru vendor specific ...[2024-10-01 08:27:28.515536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.820 [2024-10-01 08:27:28.515547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.515782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.820 [2024-10-01 08:27:28.515790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.515970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.820 [2024-10-01 08:27:28.515977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:36.820 [2024-10-01 08:27:28.516229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.820 [2024-10-01 08:27:28.516241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:36.820 passed 00:12:36.820 Test: blockdev nvme admin passthru ...passed 00:12:36.820 Test: blockdev copy ...passed 00:12:36.820 00:12:36.821 Run Summary: Type Total Ran Passed Failed Inactive 00:12:36.821 suites 1 1 n/a 0 0 00:12:36.821 tests 23 23 23 0 0 00:12:36.821 asserts 152 152 152 0 n/a 00:12:36.821 00:12:36.821 Elapsed time = 1.371 seconds 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.081 rmmod nvme_tcp 00:12:37.081 rmmod nvme_fabrics 00:12:37.081 rmmod nvme_keyring 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 3626168 ']' 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3626168 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3626168 ']' 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3626168 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3626168 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3626168' 00:12:37.081 killing process with pid 3626168 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3626168 00:12:37.081 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3626168 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.341 08:27:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.374 08:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.374 00:12:39.374 real 0m12.161s 00:12:39.374 user 0m13.027s 00:12:39.374 sys 0m6.224s 00:12:39.374 08:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.374 08:27:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:39.374 ************************************ 00:12:39.374 END TEST nvmf_bdevio 00:12:39.374 ************************************ 00:12:39.374 08:27:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:39.374 00:12:39.374 real 4m59.740s 00:12:39.374 user 11m49.100s 00:12:39.374 sys 1m47.889s 
00:12:39.374 08:27:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.374 08:27:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:39.374 ************************************ 00:12:39.374 END TEST nvmf_target_core 00:12:39.374 ************************************ 00:12:39.374 08:27:31 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:39.374 08:27:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:39.374 08:27:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.374 08:27:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.374 ************************************ 00:12:39.374 START TEST nvmf_target_extra 00:12:39.374 ************************************ 00:12:39.374 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:39.635 * Looking for test storage... 00:12:39.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.635 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:39.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.636 --rc genhtml_branch_coverage=1 00:12:39.636 --rc genhtml_function_coverage=1 00:12:39.636 --rc genhtml_legend=1 00:12:39.636 --rc geninfo_all_blocks=1 00:12:39.636 --rc geninfo_unexecuted_blocks=1 00:12:39.636 00:12:39.636 ' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:39.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.636 --rc genhtml_branch_coverage=1 00:12:39.636 --rc genhtml_function_coverage=1 00:12:39.636 --rc genhtml_legend=1 00:12:39.636 --rc geninfo_all_blocks=1 00:12:39.636 --rc geninfo_unexecuted_blocks=1 00:12:39.636 00:12:39.636 ' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:39.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.636 --rc genhtml_branch_coverage=1 00:12:39.636 --rc genhtml_function_coverage=1 00:12:39.636 --rc genhtml_legend=1 00:12:39.636 --rc geninfo_all_blocks=1 00:12:39.636 --rc geninfo_unexecuted_blocks=1 00:12:39.636 00:12:39.636 ' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:39.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.636 --rc genhtml_branch_coverage=1 00:12:39.636 --rc genhtml_function_coverage=1 00:12:39.636 --rc genhtml_legend=1 00:12:39.636 --rc geninfo_all_blocks=1 00:12:39.636 --rc geninfo_unexecuted_blocks=1 00:12:39.636 00:12:39.636 ' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.636 08:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.898 ************************************ 00:12:39.898 START TEST nvmf_example 00:12:39.898 ************************************ 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:39.898 * Looking for test storage... 
00:12:39.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.898 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.899 --rc genhtml_branch_coverage=1 00:12:39.899 --rc genhtml_function_coverage=1 00:12:39.899 --rc genhtml_legend=1 00:12:39.899 --rc geninfo_all_blocks=1 00:12:39.899 --rc geninfo_unexecuted_blocks=1 00:12:39.899 00:12:39.899 ' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.899 --rc genhtml_branch_coverage=1 00:12:39.899 --rc genhtml_function_coverage=1 00:12:39.899 --rc genhtml_legend=1 00:12:39.899 --rc geninfo_all_blocks=1 00:12:39.899 --rc geninfo_unexecuted_blocks=1 00:12:39.899 00:12:39.899 ' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.899 --rc genhtml_branch_coverage=1 00:12:39.899 --rc genhtml_function_coverage=1 00:12:39.899 --rc genhtml_legend=1 00:12:39.899 --rc geninfo_all_blocks=1 00:12:39.899 --rc geninfo_unexecuted_blocks=1 00:12:39.899 00:12:39.899 ' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:39.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.899 --rc genhtml_branch_coverage=1 00:12:39.899 --rc genhtml_function_coverage=1 00:12:39.899 --rc genhtml_legend=1 00:12:39.899 --rc geninfo_all_blocks=1 00:12:39.899 --rc geninfo_unexecuted_blocks=1 00:12:39.899 00:12:39.899 ' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:39.899 08:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:39.899 08:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.899 08:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:48.044 08:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.044 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:48.045 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in 
"${pci_devs[@]}" 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:48.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:48.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:48.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 
-- # is_hw=yes 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.045 08:27:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.045 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.045 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.045 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.045 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:48.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:12:48.045 00:12:48.045 --- 10.0.0.2 ping statistics --- 00:12:48.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.045 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:12:48.045 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:12:48.046 00:12:48.046 --- 10.0.0.1 ping statistics --- 00:12:48.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.046 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3631165 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3631165 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3631165 ']' 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.046 08:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.046 08:27:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.306 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.306 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:48.306 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:48.306 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.307 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
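
Stepping back to the top of this stretch: before the rpc_cmd sequence above could run, the example target (launched at nvmf_example.sh@33 inside the namespace) had to come up and answer on /var/tmp/spdk.sock, which is what the waitforlisten trace with max_retries=100 is doing. A rough equivalent of that launch-and-wait step; wait_for_rpc here is a simplified stand-in for the real waitforlisten helper, not its exact implementation:

    # Launch the example NVMe-oF target in the namespace, then poll its RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
    nvmfpid=$!

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do             # max_retries=100, as traced
            kill -0 "$pid" 2>/dev/null || return 1  # app died before listening
            "$SPDK/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.5                               # retry interval: sketch value
        done
        return 1
    }
    wait_for_rpc "$nvmfpid"
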
xtrace_disable 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:48.567 08:27:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:00.802 Initializing NVMe Controllers 00:13:00.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:00.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:00.802 Initialization complete. Launching workers. 00:13:00.802 ======================================================== 00:13:00.802 Latency(us) 00:13:00.802 Device Information : IOPS MiB/s Average min max 00:13:00.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19025.75 74.32 3363.42 645.32 15489.76 00:13:00.802 ======================================================== 00:13:00.802 Total : 19025.75 74.32 3363.42 645.32 15489.76 00:13:00.802 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.802 rmmod nvme_tcp 00:13:00.802 rmmod nvme_fabrics 00:13:00.802 rmmod nvme_keyring 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 3631165 ']' 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 3631165 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3631165 ']' 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3631165 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3631165 00:13:00.802 08:27:50 
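
The five rpc_cmd calls above assemble the whole target, and the spdk_nvme_perf run then drives it from the initiator side of the namespace split. Written out as standalone commands (rpc_cmd in the trace is a thin wrapper over these same RPCs; scripts/rpc.py talks to the default /var/tmp/spdk.sock):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"

    "$rpc" nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u 8192 = in-capsule data size
    "$rpc" bdev_malloc_create 64 512                 # 64 MiB RAM bdev, 512 B blocks -> "Malloc0"
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # -q 64: queue depth; -o 4096: 4 KiB I/Os; -w randrw -M 30: random mix, 30% reads;
    # -t 10: run ten seconds against the subsystem just exported.
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

In the result table above this works out to roughly 19k IOPS of 4 KiB I/O at about 3.4 ms average latency across the namespace loopback.
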
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3631165' 00:13:00.802 killing process with pid 3631165 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3631165 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3631165 00:13:00.802 nvmf threads initialize successfully 00:13:00.802 bdev subsystem init successfully 00:13:00.802 created a nvmf target service 00:13:00.802 create targets's poll groups done 00:13:00.802 all subsystems of target started 00:13:00.802 nvmf target is running 00:13:00.802 all subsystems of target stopped 00:13:00.802 destroy targets's poll groups done 00:13:00.802 destroyed the nvmf target service 00:13:00.802 bdev subsystem finish successfully 00:13:00.802 nvmf threads destroy successfully 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.802 08:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.063 00:13:01.063 real 0m21.365s 00:13:01.063 user 0m46.733s 00:13:01.063 sys 0m6.916s 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.063 ************************************ 00:13:01.063 END TEST nvmf_example 00:13:01.063 ************************************ 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
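
Teardown, traced above, mirrors the setup: iptr restores the firewall by round-tripping iptables-save through grep -v SPDK_NVMF, so every rule tagged at setup time disappears; remove_spdk_ns drops the test namespace; and the initiator-side address is flushed. Roughly (deleting cvl_0_0_ns_spdk directly is an assumed equivalent of the _remove_spdk_ns helper):

    # Strip SPDK-tagged firewall rules, drop the namespace, flush the initiator.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
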
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.063 08:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:01.323 ************************************ 00:13:01.323 START TEST nvmf_filesystem 00:13:01.323 ************************************ 00:13:01.323 08:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:01.323 * Looking for test storage... 00:13:01.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.323 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.323 --rc genhtml_branch_coverage=1 00:13:01.323 --rc genhtml_function_coverage=1 00:13:01.323 --rc genhtml_legend=1 00:13:01.323 --rc geninfo_all_blocks=1 00:13:01.324 --rc geninfo_unexecuted_blocks=1 00:13:01.324 00:13:01.324 ' 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:01.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.324 --rc genhtml_branch_coverage=1 00:13:01.324 --rc genhtml_function_coverage=1 00:13:01.324 --rc genhtml_legend=1 00:13:01.324 --rc geninfo_all_blocks=1 00:13:01.324 --rc geninfo_unexecuted_blocks=1 00:13:01.324 00:13:01.324 ' 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:01.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.324 --rc genhtml_branch_coverage=1 00:13:01.324 --rc genhtml_function_coverage=1 00:13:01.324 --rc genhtml_legend=1 00:13:01.324 --rc geninfo_all_blocks=1 00:13:01.324 --rc geninfo_unexecuted_blocks=1 00:13:01.324 00:13:01.324 ' 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:01.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.324 --rc genhtml_branch_coverage=1 00:13:01.324 --rc genhtml_function_coverage=1 00:13:01.324 --rc genhtml_legend=1 00:13:01.324 --rc geninfo_all_blocks=1 00:13:01.324 --rc geninfo_unexecuted_blocks=1 00:13:01.324 00:13:01.324 ' 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:01.324 08:27:53 
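
The lt/cmp_versions trace above (deciding which lcov option set to export) splits each version string on '.', '-', or ':' and compares numerically, component by component, with missing components treated as zero. A condensed sketch of that logic; the real helper also normalizes non-numeric components through its decimal() function:

    # Component-wise "less than" for version strings, as traced above.
    version_lt() {
        local -a a b
        local IFS=.-: v x y
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            x=${a[v]:-0} y=${b[v]:-0}      # pad the shorter version with zeros
            ((x > y)) && return 1
            ((x < y)) && return 0
        done
        return 1                           # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
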
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:01.324 08:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:01.324 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:01.588 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:01.588 #define SPDK_CONFIG_H 00:13:01.588 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:01.588 #define SPDK_CONFIG_APPS 1 00:13:01.588 #define SPDK_CONFIG_ARCH native 00:13:01.588 #undef SPDK_CONFIG_ASAN 00:13:01.588 #undef SPDK_CONFIG_AVAHI 00:13:01.589 #undef SPDK_CONFIG_CET 00:13:01.589 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:01.589 #define SPDK_CONFIG_COVERAGE 1 00:13:01.589 #define SPDK_CONFIG_CROSS_PREFIX 00:13:01.589 #undef SPDK_CONFIG_CRYPTO 00:13:01.589 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:01.589 #undef SPDK_CONFIG_CUSTOMOCF 00:13:01.589 #undef SPDK_CONFIG_DAOS 00:13:01.589 #define SPDK_CONFIG_DAOS_DIR 00:13:01.589 #define SPDK_CONFIG_DEBUG 1 00:13:01.589 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:01.589 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:01.589 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:01.589 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:01.589 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:01.589 #undef SPDK_CONFIG_DPDK_UADK 00:13:01.589 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:01.589 #define SPDK_CONFIG_EXAMPLES 1 00:13:01.589 #undef SPDK_CONFIG_FC 00:13:01.589 #define SPDK_CONFIG_FC_PATH 00:13:01.589 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:01.589 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:01.589 #define SPDK_CONFIG_FSDEV 1 00:13:01.589 #undef SPDK_CONFIG_FUSE 00:13:01.589 #undef SPDK_CONFIG_FUZZER 00:13:01.589 #define SPDK_CONFIG_FUZZER_LIB 00:13:01.589 #undef SPDK_CONFIG_GOLANG 00:13:01.589 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:01.589 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:01.589 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:01.589 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:01.589 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:01.589 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:01.589 #undef SPDK_CONFIG_HAVE_LZ4 00:13:01.589 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:01.589 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:01.589 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:01.589 #define SPDK_CONFIG_IDXD 1 00:13:01.589 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:01.589 #undef SPDK_CONFIG_IPSEC_MB 00:13:01.589 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:01.589 #define SPDK_CONFIG_ISAL 1 00:13:01.589 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:01.589 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:01.589 #define SPDK_CONFIG_LIBDIR 00:13:01.589 #undef SPDK_CONFIG_LTO 00:13:01.589 #define SPDK_CONFIG_MAX_LCORES 128 00:13:01.589 #define SPDK_CONFIG_NVME_CUSE 1 00:13:01.589 #undef SPDK_CONFIG_OCF 00:13:01.589 #define SPDK_CONFIG_OCF_PATH 00:13:01.589 #define SPDK_CONFIG_OPENSSL_PATH 00:13:01.589 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:01.589 #define SPDK_CONFIG_PGO_DIR 00:13:01.589 #undef SPDK_CONFIG_PGO_USE 00:13:01.589 #define SPDK_CONFIG_PREFIX /usr/local 00:13:01.589 #undef SPDK_CONFIG_RAID5F 00:13:01.589 #undef SPDK_CONFIG_RBD 00:13:01.589 #define SPDK_CONFIG_RDMA 1 00:13:01.589 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:01.589 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:01.589 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:01.589 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:01.589 #define SPDK_CONFIG_SHARED 1 00:13:01.589 #undef SPDK_CONFIG_SMA 00:13:01.589 #define SPDK_CONFIG_TESTS 1 00:13:01.589 #undef SPDK_CONFIG_TSAN 00:13:01.589 #define SPDK_CONFIG_UBLK 1 00:13:01.589 #define SPDK_CONFIG_UBSAN 1 00:13:01.589 #undef SPDK_CONFIG_UNIT_TESTS 00:13:01.589 #undef SPDK_CONFIG_URING 00:13:01.589 #define 
SPDK_CONFIG_URING_PATH 00:13:01.589 #undef SPDK_CONFIG_URING_ZNS 00:13:01.589 #undef SPDK_CONFIG_USDT 00:13:01.589 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:01.589 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:01.589 #define SPDK_CONFIG_VFIO_USER 1 00:13:01.589 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:01.589 #define SPDK_CONFIG_VHOST 1 00:13:01.589 #define SPDK_CONFIG_VIRTIO 1 00:13:01.589 #undef SPDK_CONFIG_VTUNE 00:13:01.589 #define SPDK_CONFIG_VTUNE_DIR 00:13:01.589 #define SPDK_CONFIG_WERROR 1 00:13:01.589 #define SPDK_CONFIG_WPDK_DIR 00:13:01.589 #undef SPDK_CONFIG_XNVME 00:13:01.589 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.589 08:27:53 
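
The heavily escaped glob in the applications.sh trace above is nothing more than a substring test on the generated config.h, gating the debug-only path (SPDK_AUTOTEST_DEBUG_APPS). Unescaped, the check reads:

    # Is this SPDK build compiled with debug support?
    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h

    if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build"
    fi
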
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:01.589 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
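
The pm/common trace above picks resource monitors and remembers which of them must run elevated: an associative array maps each monitor to a 0/1 flag, and indexing the two-entry SUDO array with that flag yields either an empty prefix or 'sudo -E'. The idiom in isolation (monitor names from this run):

    # Sudo-selection idiom from pm/common: flag per monitor, prefix by index.
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    SUDO[0]=
    SUDO[1]='sudo -E'

    for mon in "${!MONITOR_RESOURCES_SUDO[@]}"; do
        echo "${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} $mon"   # "" or "sudo -E"
    done
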
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:01.590 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:01.591 
08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:01.591 08:27:53 
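
The long run of ': 0' / ': 1' lines paired with exports above is autotest_common.sh applying defaults to the SPDK_TEST_* switches: ':' is a no-op, but expanding ${VAR:=default} in its arguments assigns the default only when the variable is unset, so values already seeded by autorun-spdk.conf (the ': 1' for SPDK_TEST_NVMF, ': tcp' for the transport, ': e810' for the NICs in this trace) survive, and the bare ': 0' lines are what remains after expansion. The pattern, with illustrative defaults rather than the script's exact table:

    # Default-then-export idiom behind the ": 0 / export VAR" pairs above.
    : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVME_CLI:=0}";         export SPDK_TEST_NVME_CLI
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
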
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
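The repeated SPDK/DPDK/libvfio-user triplets in the LD_LIBRARY_PATH and PYTHONPATH exports above are a side effect of autotest_common.sh prepending the same build directories each time it is re-sourced by nested test scripts. A guarded prepend along these lines would keep the variables minimal — this is a sketch of ours, not what the harness actually does, and prepend_path is a hypothetical helper:

  prepend_path() {                 # usage: prepend_path VAR /some/dir
      local var=$1 dir=$2
      case ":${!var}:" in
          *":${dir}:"*) ;;                                  # already present, skip
          *) printf -v "$var" '%s' "${dir}${!var:+:${!var}}" ;;
      esac
      export "$var"
  }
  prepend_path LD_LIBRARY_PATH "$SPDK_LIB_DIR"
  prepend_path LD_LIBRARY_PATH "$DPDK_LIB_DIR"
  prepend_path LD_LIBRARY_PATH "$VFIO_LIB_DIR"

Because the duplicates are harmless to the dynamic loader (the first match wins), the real script simply tolerates them.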
00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:01.591 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
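The exports traced here show the harness's sanitizer setup in one place: ASan and UBSan behavior is driven entirely by colon-separated option strings in the environment, and known-benign leaks (here libfuse3.so) are routed to LeakSanitizer through a suppression file. A minimal standalone rendition of the same pattern, with ./app standing in for any instrumented SPDK binary:

  # sanitizer runtimes read these at startup; values copied from the trace above
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  # LeakSanitizer consults the suppression file when reporting at process exit
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
  ./app "$@"   # placeholder binary; any ASan/UBSan-built executable works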
00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3634003 ]] 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3634003 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 
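set_test_storage, whose trace follows, picks a directory with at least the requested free space (2147483648 bytes, i.e. 2 GiB, here): it builds a candidate list ending in a mktemp -udt spdk.XXXXXX fallback and walks df output per candidate. A simplified sketch of that selection idea — our POSIX df -Pk parsing, not the associative-array bookkeeping the script itself keeps:

  requested_size=$((2 * 1024 * 1024 * 1024))     # 2147483648, as in the trace
  pick_test_storage() {
      local dir avail_kb fallback
      fallback=$(mktemp -udt spdk.XXXXXX)        # same template the trace shows
      for dir in "$@" "$fallback"; do
          mkdir -p "$dir" || continue
          # column 4 of POSIX `df -Pk` output is available space in 1K blocks
          avail_kb=$(df -Pk "$dir" | awk 'NR == 2 {print $4}')
          if (( avail_kb * 1024 >= requested_size )); then
              printf '%s\n' "$dir"
              return 0
          fi
      done
      return 1
  }
  target_dir=$(pick_test_storage "$testdir") || exit 1   # assumes $testdir is set, as in the trace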
00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.Qs6suq 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Qs6suq/tests/target /tmp/spdk.Qs6suq 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=677969920 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:13:01.592 08:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4606459904 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=118290087936 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356513280 00:13:01.592 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11066425344 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666890240 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871302656 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23367680 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:01.593 08:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677330944 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=925696 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:01.593 * Looking for test storage... 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=118290087936 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13281017856 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:01.593 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.594 --rc genhtml_branch_coverage=1 00:13:01.594 --rc genhtml_function_coverage=1 00:13:01.594 --rc genhtml_legend=1 00:13:01.594 --rc geninfo_all_blocks=1 00:13:01.594 --rc geninfo_unexecuted_blocks=1 00:13:01.594 00:13:01.594 ' 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.594 --rc genhtml_branch_coverage=1 00:13:01.594 --rc genhtml_function_coverage=1 00:13:01.594 --rc genhtml_legend=1 00:13:01.594 --rc geninfo_all_blocks=1 00:13:01.594 --rc geninfo_unexecuted_blocks=1 00:13:01.594 00:13:01.594 ' 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.594 --rc genhtml_branch_coverage=1 00:13:01.594 --rc genhtml_function_coverage=1 00:13:01.594 --rc genhtml_legend=1 00:13:01.594 --rc geninfo_all_blocks=1 00:13:01.594 --rc geninfo_unexecuted_blocks=1 00:13:01.594 00:13:01.594 ' 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.594 --rc genhtml_branch_coverage=1 00:13:01.594 --rc genhtml_function_coverage=1 00:13:01.594 --rc genhtml_legend=1 00:13:01.594 --rc geninfo_all_blocks=1 00:13:01.594 --rc geninfo_unexecuted_blocks=1 00:13:01.594 00:13:01.594 ' 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
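The cmp_versions trace a few entries back (scripts/common.sh@333-368, used here to decide whether the installed lcov predates the 2.x option names) splits both version strings on '.', '-' and ':' and compares them field by field, treating missing fields as 0. A compact, purely numeric rendition of that idea — it ignores non-numeric fields, which the real script also only partially handles:

  version_lt() {                  # returns 0 if $1 < $2
      local -a v1 v2
      local i
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                    # equal
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the lt 1.15 2 call traced above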
-- nvmf/common.sh@7 -- # uname -s 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.594 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:01.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:01.855 08:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:01.855 08:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:09.993 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:09.993 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:09.993 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.994 08:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:09.994 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:09.994 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.994 
08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:09.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:13:09.994 00:13:09.994 --- 10.0.0.2 ping statistics --- 00:13:09.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.994 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:13:09.994 00:13:09.994 --- 10.0.0.1 ping statistics --- 00:13:09.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.994 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.994 ************************************ 00:13:09.994 START TEST nvmf_filesystem_no_in_capsule 00:13:09.994 ************************************ 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3637639 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3637639 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3637639 ']' 00:13:09.994 
08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.994 08:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.994 [2024-10-01 08:28:00.835077] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:13:09.994 [2024-10-01 08:28:00.835127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.994 [2024-10-01 08:28:00.903113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.994 [2024-10-01 08:28:00.966676] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.994 [2024-10-01 08:28:00.966713] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.994 [2024-10-01 08:28:00.966721] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.994 [2024-10-01 08:28:00.966732] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.995 [2024-10-01 08:28:00.966738] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
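Everything nvmftestinit traced above reduces to a single-host loopback topology: one port of the E810 NIC (cvl_0_0) is moved into a network namespace to act as the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, and connectivity is verified both ways before nvmf_tgt is started inside the namespace. Condensed into one runnable sequence (root required; interface names are the ones Jenkins assigned in this run, and the socket-polling loop is a simplified stand-in for the waitforlisten helper):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                              # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
  # launch the target in the namespace, then wait for its RPC socket to appear
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

With the socket up, the RPC calls that follow (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, and the namespace/listener additions) all go through that /var/tmp/spdk.sock endpoint.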
00:13:09.995 [2024-10-01 08:28:00.968253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.995 [2024-10-01 08:28:00.968368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.995 [2024-10-01 08:28:00.968508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.995 [2024-10-01 08:28:00.968508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.995 [2024-10-01 08:28:01.675040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.995 Malloc1 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.995 08:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.995 [2024-10-01 08:28:01.802107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.995 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:10.255 { 00:13:10.255 "name": "Malloc1", 00:13:10.255 "aliases": [ 00:13:10.255 "3bbe9d0b-8447-4f06-b2aa-70492e7f7ca9" 00:13:10.255 ], 00:13:10.255 "product_name": "Malloc disk", 00:13:10.255 "block_size": 512, 00:13:10.255 "num_blocks": 1048576, 00:13:10.255 "uuid": "3bbe9d0b-8447-4f06-b2aa-70492e7f7ca9", 00:13:10.255 "assigned_rate_limits": { 00:13:10.255 "rw_ios_per_sec": 0, 00:13:10.255 "rw_mbytes_per_sec": 0, 00:13:10.255 "r_mbytes_per_sec": 0, 00:13:10.255 "w_mbytes_per_sec": 0 00:13:10.255 }, 00:13:10.255 "claimed": true, 00:13:10.255 "claim_type": "exclusive_write", 00:13:10.255 "zoned": false, 00:13:10.255 "supported_io_types": { 00:13:10.255 "read": 
true, 00:13:10.255 "write": true, 00:13:10.255 "unmap": true, 00:13:10.255 "flush": true, 00:13:10.255 "reset": true, 00:13:10.255 "nvme_admin": false, 00:13:10.255 "nvme_io": false, 00:13:10.255 "nvme_io_md": false, 00:13:10.255 "write_zeroes": true, 00:13:10.255 "zcopy": true, 00:13:10.255 "get_zone_info": false, 00:13:10.255 "zone_management": false, 00:13:10.255 "zone_append": false, 00:13:10.255 "compare": false, 00:13:10.255 "compare_and_write": false, 00:13:10.255 "abort": true, 00:13:10.255 "seek_hole": false, 00:13:10.255 "seek_data": false, 00:13:10.255 "copy": true, 00:13:10.255 "nvme_iov_md": false 00:13:10.255 }, 00:13:10.255 "memory_domains": [ 00:13:10.255 { 00:13:10.255 "dma_device_id": "system", 00:13:10.255 "dma_device_type": 1 00:13:10.255 }, 00:13:10.255 { 00:13:10.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.255 "dma_device_type": 2 00:13:10.255 } 00:13:10.255 ], 00:13:10.255 "driver_specific": {} 00:13:10.255 } 00:13:10.255 ]' 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:10.255 08:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.637 08:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.637 08:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:11.637 08:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.637 08:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:11.637 08:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.178 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.178 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:14.179 08:28:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:14.749 08:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.692 ************************************ 00:13:15.692 START TEST filesystem_ext4 00:13:15.692 ************************************ 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
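The sequence above provisions the target over RPC and attaches the host: a TCP transport is created with in-capsule data disabled (-c 0), backed by a 512 MiB malloc bdev (bdev_get_bdevs reports block_size 512 × num_blocks 1048576 = 536870912 bytes, matching the requested size), exposed as nqn.2016-06.io.spdk:cnode1, connected with nvme-cli, and carved into a single GPT partition. Replayed as a sketch (the rpc.py path is an assumption; the --hostnqn/--hostid flags from the log are omitted for brevity):

# Not part of the log: the target- and host-side steps above, distilled.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # no in-capsule data
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# Resolve the kernel device by its serial, then partition it.
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe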
00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:15.692 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:15.692 mke2fs 1.47.0 (5-Feb-2023) 00:13:15.692 Discarding device blocks: 0/522240 done 00:13:15.692 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:15.692 Filesystem UUID: 757522bc-c5a1-456d-b450-162e1c6257f2 00:13:15.692 Superblock backups stored on blocks: 00:13:15.692 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:15.692 00:13:15.692 Allocating group tables: 0/64 done 00:13:15.692 Writing inode tables: 0/64 done 00:13:15.952 Creating journal (8192 blocks): done 00:13:16.212 Writing superblocks and filesystem accounting information: 0/64 done 00:13:16.212 00:13:16.212 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:16.212 08:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:22.798 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:22.798 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:22.798 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:22.798 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:22.798 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:22.799 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:22.799 
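The filesystem_ext4 case above is the generic nvmf_filesystem_create check: format the partition, mount it, write and delete a file with syncs in between, unmount, then confirm the target process and both device nodes survived. Distilled as a sketch (ext4 takes -F to force, btrfs and xfs take -f, as the later cases show):

# Not part of the log: the per-filesystem check condensed from the trace.
mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync            # push a write through NVMe/TCP
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                       # target process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1    # controller still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible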
08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3637639 00:13:22.799 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:22.799 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:22.799 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:22.799 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:22.799 00:13:22.799 real 0m6.671s 00:13:22.799 user 0m0.030s 00:13:22.799 sys 0m0.076s 00:13:22.799 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.799 08:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:22.799 ************************************ 00:13:22.799 END TEST filesystem_ext4 00:13:22.799 ************************************ 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.799 ************************************ 00:13:22.799 START TEST filesystem_btrfs 00:13:22.799 ************************************ 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:22.799 08:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:22.799 btrfs-progs v6.8.1 00:13:22.799 See https://btrfs.readthedocs.io for more information. 00:13:22.799 00:13:22.799 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:22.799 NOTE: several default settings have changed in version 5.15, please make sure 00:13:22.799 this does not affect your deployments: 00:13:22.799 - DUP for metadata (-m dup) 00:13:22.799 - enabled no-holes (-O no-holes) 00:13:22.799 - enabled free-space-tree (-R free-space-tree) 00:13:22.799 00:13:22.799 Label: (null) 00:13:22.799 UUID: 23145abc-cb05-4963-b2c9-56c60a3c9038 00:13:22.799 Node size: 16384 00:13:22.799 Sector size: 4096 (CPU page size: 4096) 00:13:22.799 Filesystem size: 510.00MiB 00:13:22.799 Block group profiles: 00:13:22.799 Data: single 8.00MiB 00:13:22.799 Metadata: DUP 32.00MiB 00:13:22.799 System: DUP 8.00MiB 00:13:22.799 SSD detected: yes 00:13:22.799 Zoned device: no 00:13:22.799 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:22.799 Checksum: crc32c 00:13:22.799 Number of devices: 1 00:13:22.799 Devices: 00:13:22.799 ID SIZE PATH 00:13:22.799 1 510.00MiB /dev/nvme0n1p1 00:13:22.799 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3637639 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:22.799 
08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:22.799 00:13:22.799 real 0m0.440s 00:13:22.799 user 0m0.029s 00:13:22.799 sys 0m0.117s 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:22.799 ************************************ 00:13:22.799 END TEST filesystem_btrfs 00:13:22.799 ************************************ 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.799 ************************************ 00:13:22.799 START TEST filesystem_xfs 00:13:22.799 ************************************ 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:22.799 08:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:23.062 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:23.062 = sectsz=512 attr=2, projid32bit=1 00:13:23.062 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:23.062 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:23.062 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:23.062 = sunit=0 swidth=0 blks 00:13:23.062 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:23.062 log =internal log bsize=4096 blocks=16384, version=2 00:13:23.062 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:23.062 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:23.634 Discarding blocks...Done. 00:13:23.634 08:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:23.634 08:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3637639 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:25.548 00:13:25.548 real 0m2.627s 00:13:25.548 user 0m0.029s 00:13:25.548 sys 0m0.075s 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:25.548 ************************************ 00:13:25.548 END TEST filesystem_xfs 00:13:25.548 ************************************ 00:13:25.548 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:25.810 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:25.810 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.070 08:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3637639 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3637639 ']' 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3637639 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3637639 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3637639' 00:13:26.070 killing process with pid 3637639 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3637639 00:13:26.070 08:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 3637639 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:26.331 00:13:26.331 real 0m17.249s 00:13:26.331 user 1m8.068s 00:13:26.331 sys 0m1.374s 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.331 ************************************ 00:13:26.331 END TEST nvmf_filesystem_no_in_capsule 00:13:26.331 ************************************ 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:26.331 ************************************ 00:13:26.331 START TEST nvmf_filesystem_in_capsule 00:13:26.331 ************************************ 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:26.331 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3641229 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3641229 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3641229 ']' 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
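Teardown and restart: the host disconnects from cnode1, the subsystem is deleted over RPC, and pid 3637639 is killed and reaped; the in_capsule variant then starts a fresh target (pid 3641229) inside the cvl_0_0_ns_spdk network namespace and, as the next entries show, recreates the transport with -c 4096 so writes up to 4 KiB ride inside the command capsule. As a sketch (paths assumed as before):

# Not part of the log: teardown plus the in-capsule restart shown above.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4 KiB in-capsule data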
00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.332 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.332 [2024-10-01 08:28:18.140224] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:13:26.332 [2024-10-01 08:28:18.140274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.593 [2024-10-01 08:28:18.208441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.593 [2024-10-01 08:28:18.276257] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.593 [2024-10-01 08:28:18.276294] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.593 [2024-10-01 08:28:18.276302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.593 [2024-10-01 08:28:18.276309] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.593 [2024-10-01 08:28:18.276315] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.593 [2024-10-01 08:28:18.277867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.593 [2024-10-01 08:28:18.278017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.593 [2024-10-01 08:28:18.278158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.593 [2024-10-01 08:28:18.278317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.166 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.166 [2024-10-01 08:28:18.983019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.166 08:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.427 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:27.427 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.427 08:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.427 Malloc1 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.427 [2024-10-01 08:28:19.111031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:27.427 08:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:27.427 { 00:13:27.427 "name": "Malloc1", 00:13:27.427 "aliases": [ 00:13:27.427 "a438f867-62ee-4da4-b987-a76ab162ade7" 00:13:27.427 ], 00:13:27.427 "product_name": "Malloc disk", 00:13:27.427 "block_size": 512, 00:13:27.427 "num_blocks": 1048576, 00:13:27.427 "uuid": "a438f867-62ee-4da4-b987-a76ab162ade7", 00:13:27.427 "assigned_rate_limits": { 00:13:27.427 "rw_ios_per_sec": 0, 00:13:27.427 "rw_mbytes_per_sec": 0, 00:13:27.427 "r_mbytes_per_sec": 0, 00:13:27.427 "w_mbytes_per_sec": 0 00:13:27.427 }, 00:13:27.427 "claimed": true, 00:13:27.427 "claim_type": "exclusive_write", 00:13:27.427 "zoned": false, 00:13:27.427 "supported_io_types": { 00:13:27.427 "read": true, 00:13:27.427 "write": true, 00:13:27.427 "unmap": true, 00:13:27.427 "flush": true, 00:13:27.427 "reset": true, 00:13:27.427 "nvme_admin": false, 00:13:27.427 "nvme_io": false, 00:13:27.427 "nvme_io_md": false, 00:13:27.427 "write_zeroes": true, 00:13:27.427 "zcopy": true, 00:13:27.427 "get_zone_info": false, 00:13:27.427 "zone_management": false, 00:13:27.427 "zone_append": false, 00:13:27.427 "compare": false, 00:13:27.427 "compare_and_write": false, 00:13:27.427 "abort": true, 00:13:27.427 "seek_hole": false, 00:13:27.427 "seek_data": false, 00:13:27.427 "copy": true, 00:13:27.427 "nvme_iov_md": false 00:13:27.427 }, 00:13:27.427 "memory_domains": [ 00:13:27.427 { 00:13:27.427 "dma_device_id": "system", 00:13:27.427 "dma_device_type": 1 00:13:27.427 }, 00:13:27.427 { 00:13:27.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.427 "dma_device_type": 2 00:13:27.427 } 00:13:27.427 ], 00:13:27.427 "driver_specific": {} 00:13:27.427 } 00:13:27.427 ]' 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:27.427 08:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.343 08:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.343 08:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.343 08:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.343 08:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:29.343 08:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:31.258 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:31.259 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:31.259 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:31.259 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:31.259 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:31.259 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:31.259 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:31.259 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:31.259 08:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:31.527 08:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:32.099 08:28:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.041 ************************************ 00:13:33.041 START TEST filesystem_in_capsule_ext4 00:13:33.041 ************************************ 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:33.041 08:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:33.041 mke2fs 1.47.0 (5-Feb-2023) 00:13:33.041 Discarding device blocks: 0/522240 done 00:13:33.041 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:33.041 Filesystem UUID: c9358f25-c161-4492-b435-6b7c905052b3 00:13:33.041 Superblock backups stored on blocks: 00:13:33.041 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:33.041 00:13:33.041 Allocating group tables: 0/64 done 00:13:33.041 Writing inode tables: 
0/64 done 00:13:34.249 Creating journal (8192 blocks): done 00:13:34.249 Writing superblocks and filesystem accounting information: 0/64 done 00:13:34.249 00:13:34.249 08:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:34.249 08:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3641229 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:40.831 00:13:40.831 real 0m6.807s 00:13:40.831 user 0m0.026s 00:13:40.831 sys 0m0.078s 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:40.831 ************************************ 00:13:40.831 END TEST filesystem_in_capsule_ext4 00:13:40.831 ************************************ 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.831 
************************************ 00:13:40.831 START TEST filesystem_in_capsule_btrfs 00:13:40.831 ************************************ 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:40.831 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:40.832 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:40.832 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:40.832 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:40.832 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:40.832 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:40.832 08:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:40.832 btrfs-progs v6.8.1 00:13:40.832 See https://btrfs.readthedocs.io for more information. 00:13:40.832 00:13:40.832 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:40.832 NOTE: several default settings have changed in version 5.15, please make sure 00:13:40.832 this does not affect your deployments: 00:13:40.832 - DUP for metadata (-m dup) 00:13:40.832 - enabled no-holes (-O no-holes) 00:13:40.832 - enabled free-space-tree (-R free-space-tree) 00:13:40.832 00:13:40.832 Label: (null) 00:13:40.832 UUID: 0f8d5a5a-a9b9-4918-9edb-3320fcfe3828 00:13:40.832 Node size: 16384 00:13:40.832 Sector size: 4096 (CPU page size: 4096) 00:13:40.832 Filesystem size: 510.00MiB 00:13:40.832 Block group profiles: 00:13:40.832 Data: single 8.00MiB 00:13:40.832 Metadata: DUP 32.00MiB 00:13:40.832 System: DUP 8.00MiB 00:13:40.832 SSD detected: yes 00:13:40.832 Zoned device: no 00:13:40.832 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:40.832 Checksum: crc32c 00:13:40.832 Number of devices: 1 00:13:40.832 Devices: 00:13:40.832 ID SIZE PATH 00:13:40.832 1 510.00MiB /dev/nvme0n1p1 00:13:40.832 00:13:40.832 08:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:40.832 08:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:41.402 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:41.402 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:41.402 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:41.402 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:41.661 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:41.661 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:41.661 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3641229 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:41.662 00:13:41.662 real 0m1.642s 00:13:41.662 user 0m0.030s 00:13:41.662 sys 0m0.121s 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:13:41.662 ************************************ 00:13:41.662 END TEST filesystem_in_capsule_btrfs 00:13:41.662 ************************************ 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.662 ************************************ 00:13:41.662 START TEST filesystem_in_capsule_xfs 00:13:41.662 ************************************ 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:41.662 08:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:41.662 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:41.662 = sectsz=512 attr=2, projid32bit=1 00:13:41.662 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:41.662 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:41.662 data = bsize=4096 blocks=130560, imaxpct=25 00:13:41.662 = sunit=0 swidth=0 blks 00:13:41.662 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:41.662 log =internal log bsize=4096 blocks=16384, version=2 00:13:41.662 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:41.662 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:42.603 Discarding blocks...Done. 
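That mkfs.xfs table is the third and last make_filesystem call in this fixture. From the xtrace (common/autotest_common.sh@926-937) the helper reduces to roughly the following; the real version also declares a retry counter at @928, which this sketch drops, and the ext4 branch's flag is assumed rather than visible in this trace (-F is how mkfs.ext4 spells force):

    make_filesystem() {
        local fstype=$1                  # @926
        local dev_name=$2                # @927
        local force                      # @929
        if [ "$fstype" = ext4 ]; then    # @931
            force=-F                     # assumption: ext4 branch not traced here
        else
            force=-f                     # @934: btrfs and xfs both take -f
        fi
        mkfs.$fstype $force "$dev_name"  # @937
    }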
00:13:42.603 08:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:42.603 08:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3641229 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:44.515 00:13:44.515 real 0m2.935s 00:13:44.515 user 0m0.024s 00:13:44.515 sys 0m0.079s 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:44.515 ************************************ 00:13:44.515 END TEST filesystem_in_capsule_xfs 00:13:44.515 ************************************ 00:13:44.515 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:45.088 08:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.348 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3641229 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3641229 ']' 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3641229 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3641229 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3641229' 00:13:45.609 killing process with pid 3641229 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3641229 00:13:45.609 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3641229 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:45.869 00:13:45.869 real 0m19.403s 00:13:45.869 user 1m16.591s 00:13:45.869 sys 0m1.453s 00:13:45.869 08:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.869 ************************************ 00:13:45.869 END TEST nvmf_filesystem_in_capsule 00:13:45.869 ************************************ 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.869 rmmod nvme_tcp 00:13:45.869 rmmod nvme_fabrics 00:13:45.869 rmmod nvme_keyring 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.869 08:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.476 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:48.476 00:13:48.476 real 0m46.760s 00:13:48.476 user 2m27.048s 00:13:48.476 sys 0m8.519s 00:13:48.476 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.476 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:48.476 
************************************ 00:13:48.476 END TEST nvmf_filesystem 00:13:48.476 ************************************ 00:13:48.476 08:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:48.476 08:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.476 08:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.476 08:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.476 ************************************ 00:13:48.476 START TEST nvmf_target_discovery 00:13:48.476 ************************************ 00:13:48.476 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:48.476 * Looking for test storage... 00:13:48.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:48.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.477 --rc genhtml_branch_coverage=1 00:13:48.477 --rc genhtml_function_coverage=1 00:13:48.477 --rc genhtml_legend=1 00:13:48.477 --rc geninfo_all_blocks=1 00:13:48.477 --rc geninfo_unexecuted_blocks=1 00:13:48.477 00:13:48.477 ' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:48.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.477 --rc genhtml_branch_coverage=1 00:13:48.477 --rc genhtml_function_coverage=1 00:13:48.477 --rc genhtml_legend=1 00:13:48.477 --rc geninfo_all_blocks=1 00:13:48.477 --rc geninfo_unexecuted_blocks=1 00:13:48.477 00:13:48.477 ' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:48.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.477 --rc genhtml_branch_coverage=1 00:13:48.477 --rc genhtml_function_coverage=1 00:13:48.477 --rc genhtml_legend=1 00:13:48.477 --rc geninfo_all_blocks=1 00:13:48.477 --rc geninfo_unexecuted_blocks=1 00:13:48.477 00:13:48.477 ' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:48.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.477 --rc genhtml_branch_coverage=1 00:13:48.477 --rc genhtml_function_coverage=1 00:13:48.477 --rc genhtml_legend=1 00:13:48.477 --rc geninfo_all_blocks=1 00:13:48.477 --rc geninfo_unexecuted_blocks=1 00:13:48.477 00:13:48.477 ' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.477 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.478 08:28:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.142 08:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:55.142 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice 
== unbound ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:55.142 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:55.142 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:55.143 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:55.143 08:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:55.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.143 08:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.404 08:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:55.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:13:55.404 00:13:55.404 --- 10.0.0.2 ping statistics --- 00:13:55.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.404 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:13:55.404 00:13:55.404 --- 10.0.0.1 ping statistics --- 00:13:55.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.404 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=3649489 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 3649489 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3649489 ']' 00:13:55.404 
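The two ping checks above close out nvmf_tcp_init, which splits the pair of E810 ports into a point-to-point target/initiator topology: cvl_0_0 moves into a fresh network namespace and becomes the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The commands, collected from the trace:

    ip netns add cvl_0_0_ns_spdk                          # common.sh@271
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # @274: target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # @277: initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @278: target IP
    ip link set cvl_0_1 up                                # @281
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up  # @283
    ip netns exec cvl_0_0_ns_spdk ip link set lo up       # @284
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'  # @287
    ping -c 1 10.0.0.2                                    # @290: initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # @291: target -> initiator

The SPDK_NVMF comment on the iptables rule is what lets nvmftestfini strip the test rules later with iptables-save | grep -v SPDK_NVMF | iptables-restore.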
08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:55.404 08:28:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.665 [2024-10-01 08:28:47.238237] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:13:55.665 [2024-10-01 08:28:47.238306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.665 [2024-10-01 08:28:47.309693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.665 [2024-10-01 08:28:47.383754] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.665 [2024-10-01 08:28:47.383793] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.665 [2024-10-01 08:28:47.383802] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.665 [2024-10-01 08:28:47.383809] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.665 [2024-10-01 08:28:47.383815] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
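nvmfappstart then launches the target inside that namespace; the invocation is verbatim from the trace (nvmf/common.sh@504), while waitforlisten is paraphrased here as a poll against the RPC socket (a sketch only; the real helper in autotest_common.sh is more defensive):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!               # 3649489 in this run
    # spin until /var/tmp/spdk.sock answers an RPC
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The startup notices above confirm the -m 0xF core mask took effect: one reactor per core 0 through 3.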
00:13:55.665 [2024-10-01 08:28:47.385625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.665 [2024-10-01 08:28:47.385744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.665 [2024-10-01 08:28:47.385901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.665 [2024-10-01 08:28:47.385902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.236 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.236 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:56.236 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:56.236 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.236 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 [2024-10-01 08:28:48.082970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 Null1 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 [2024-10-01 08:28:48.143306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 Null2 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:56.497 Null3 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 Null4 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.497 08:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.497 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.498 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:13:56.759
00:13:56.759 Discovery Log Number of Records 6, Generation counter 6
00:13:56.759 =====Discovery Log Entry 0======
00:13:56.759 trtype: tcp
00:13:56.759 adrfam: ipv4
00:13:56.759 subtype: current discovery subsystem
00:13:56.759 treq: not required
00:13:56.759 portid: 0
00:13:56.759 trsvcid: 4420
00:13:56.759 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:13:56.759 traddr: 10.0.0.2
00:13:56.759 eflags: explicit discovery connections, duplicate discovery information
00:13:56.759 sectype: none
00:13:56.759 =====Discovery Log Entry 1======
00:13:56.759 trtype: tcp
00:13:56.759 adrfam: ipv4
00:13:56.759 subtype: nvme subsystem
00:13:56.759 treq: not required
00:13:56.759 portid: 0
00:13:56.759 trsvcid: 4420
00:13:56.759 subnqn: nqn.2016-06.io.spdk:cnode1
00:13:56.759 traddr: 10.0.0.2
00:13:56.759 eflags: none
00:13:56.759 sectype: none
00:13:56.759 =====Discovery Log Entry 2======
00:13:56.759 trtype: tcp
00:13:56.759 adrfam: ipv4
00:13:56.759 subtype: nvme subsystem
00:13:56.759 treq: not required
00:13:56.759 portid: 0
00:13:56.759 trsvcid: 4420
00:13:56.759 subnqn: nqn.2016-06.io.spdk:cnode2
00:13:56.759 traddr: 10.0.0.2
00:13:56.759 eflags: none
00:13:56.759 sectype: none
00:13:56.759 =====Discovery Log Entry 3======
00:13:56.759 trtype: tcp
00:13:56.759 adrfam: ipv4
00:13:56.759 subtype: nvme subsystem
00:13:56.759 treq: not required
00:13:56.759 portid: 0
00:13:56.759 trsvcid: 4420
00:13:56.759 subnqn: nqn.2016-06.io.spdk:cnode3
00:13:56.759 traddr: 10.0.0.2
00:13:56.759 eflags: none
00:13:56.759 sectype: none
00:13:56.759 =====Discovery Log Entry 4======
00:13:56.759 trtype: tcp
00:13:56.759 adrfam: ipv4
00:13:56.759 subtype: nvme subsystem
00:13:56.759 treq: not required
00:13:56.759 portid: 0
00:13:56.759 trsvcid: 4420
00:13:56.759 subnqn: nqn.2016-06.io.spdk:cnode4
00:13:56.759 traddr: 10.0.0.2
00:13:56.759 eflags: none
00:13:56.759 sectype: none
00:13:56.759 =====Discovery Log Entry 5======
00:13:56.759 trtype: tcp
00:13:56.759 adrfam: ipv4
00:13:56.759 subtype: discovery subsystem referral
00:13:56.759 treq: not required
00:13:56.759 portid: 0
00:13:56.759 trsvcid: 4430
00:13:56.759 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:13:56.759 traddr: 10.0.0.2
00:13:56.759 eflags: none
00:13:56.759 sectype: none
00:13:56.759 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:13:56.759 Perform nvmf subsystem discovery via RPC
00:13:56.759 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:13:56.759 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.759 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:13:56.759 [
00:13:56.759 {
00:13:56.759 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:13:56.759 "subtype": "Discovery",
00:13:56.759 "listen_addresses": [
00:13:56.759 {
00:13:56.759 "trtype": "TCP",
00:13:56.759 "adrfam": "IPv4",
00:13:56.759 "traddr": "10.0.0.2",
00:13:56.759 "trsvcid": "4420"
00:13:56.759 }
00:13:56.759 ],
00:13:56.759 "allow_any_host": true,
00:13:56.759 "hosts": []
00:13:56.759 },
00:13:56.759 {
00:13:56.759 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:56.759 "subtype": "NVMe",
00:13:56.759 "listen_addresses": [
00:13:56.759 {
00:13:56.759 "trtype": "TCP",
00:13:56.759 "adrfam": "IPv4",
00:13:56.759 "traddr": "10.0.0.2",
00:13:56.759 "trsvcid": "4420"
00:13:56.759 }
00:13:56.759 ],
00:13:56.759 "allow_any_host": true,
00:13:56.759 "hosts": [],
00:13:56.759 "serial_number": "SPDK00000000000001",
00:13:56.759 "model_number": "SPDK bdev Controller",
00:13:56.759 "max_namespaces": 32,
00:13:56.759 "min_cntlid": 1,
00:13:56.759 "max_cntlid": 65519,
00:13:56.759 "namespaces": [
00:13:56.759 {
00:13:56.759 "nsid": 1,
00:13:56.759 "bdev_name": "Null1",
00:13:56.759 "name": "Null1",
00:13:56.759 "nguid": "1A77F92E70384FA9BD31B55003829914",
00:13:56.759 "uuid": "1a77f92e-7038-4fa9-bd31-b55003829914"
00:13:56.759 }
00:13:56.759 ]
00:13:56.759 },
00:13:56.759 {
00:13:56.759 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:13:56.759 "subtype": "NVMe",
00:13:56.759 "listen_addresses": [
00:13:56.759 {
00:13:56.759 "trtype": "TCP",
00:13:56.759 "adrfam": "IPv4",
00:13:56.759 "traddr": "10.0.0.2",
00:13:56.759 "trsvcid": "4420"
00:13:56.759 }
00:13:56.759 ],
00:13:56.759 "allow_any_host": true,
00:13:56.759 "hosts": [],
00:13:56.759 "serial_number": "SPDK00000000000002",
00:13:56.759 "model_number": "SPDK bdev Controller",
00:13:56.759 "max_namespaces": 32,
00:13:56.759 "min_cntlid": 1,
00:13:56.759 "max_cntlid": 65519,
00:13:56.759 "namespaces": [
00:13:56.759 {
00:13:56.759 "nsid": 1,
00:13:56.759 "bdev_name": "Null2",
00:13:56.759 "name": "Null2",
00:13:56.759 "nguid": "E42D3C3BB5E041429C4EBF42D1A151A7",
00:13:56.759 "uuid": "e42d3c3b-b5e0-4142-9c4e-bf42d1a151a7"
00:13:56.759 }
00:13:56.759 ]
00:13:56.759 },
00:13:56.759 {
00:13:56.759 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:13:56.759 "subtype": "NVMe",
00:13:56.759 "listen_addresses": [
00:13:56.759 {
00:13:56.759 "trtype": "TCP",
00:13:56.759 "adrfam": "IPv4",
00:13:56.759 "traddr": "10.0.0.2",
00:13:56.759 "trsvcid": "4420"
00:13:56.759 }
00:13:56.759 ],
00:13:56.760 "allow_any_host": true,
00:13:56.760 "hosts": [],
00:13:56.760 "serial_number": "SPDK00000000000003",
00:13:56.760 "model_number": "SPDK bdev Controller",
00:13:56.760 "max_namespaces": 32,
00:13:56.760 "min_cntlid": 1,
00:13:56.760 "max_cntlid": 65519,
00:13:56.760 "namespaces": [
00:13:56.760 {
00:13:56.760 "nsid": 1,
00:13:56.760 "bdev_name": "Null3",
00:13:56.760 "name": "Null3",
00:13:56.760 "nguid": "67E47B9EADE14DA7BBEC4B1F21C568B1",
00:13:56.760 "uuid": "67e47b9e-ade1-4da7-bbec-4b1f21c568b1"
00:13:56.760 }
00:13:56.760 ]
00:13:56.760 },
00:13:56.760 {
00:13:56.760 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:13:56.760 "subtype": "NVMe",
00:13:56.760 "listen_addresses": [
00:13:56.760 {
00:13:56.760 "trtype": "TCP",
00:13:56.760 "adrfam": "IPv4",
00:13:56.760 "traddr": "10.0.0.2",
00:13:56.760 "trsvcid": "4420"
00:13:56.760 }
00:13:56.760 ],
00:13:56.760 "allow_any_host": true,
00:13:56.760 "hosts": [],
00:13:56.760 "serial_number": "SPDK00000000000004",
00:13:56.760 "model_number": "SPDK bdev Controller",
00:13:56.760 "max_namespaces": 32,
00:13:56.760 "min_cntlid": 1,
00:13:56.760 "max_cntlid": 65519,
00:13:56.760 "namespaces": [
00:13:56.760 {
00:13:56.760 "nsid": 1,
00:13:56.760 "bdev_name": "Null4",
00:13:56.760 "name": "Null4",
00:13:56.760 "nguid": "C69759A85830435BA97D122944A6C9AA",
00:13:56.760 "uuid": "c69759a8-5830-435b-a97d-122944a6c9aa"
00:13:56.760 }
00:13:56.760 ]
00:13:56.760 }
00:13:56.760 ]
00:13:56.760 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.760 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:13:56.760 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:13:56.760 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:56.760 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.760 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:13:57.021 08:28:48
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:57.021 08:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.021 rmmod nvme_tcp 00:13:57.021 rmmod nvme_fabrics 00:13:57.021 rmmod nvme_keyring 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 3649489 ']' 00:13:57.021 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 3649489 00:13:57.022 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3649489 ']' 00:13:57.022 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3649489 00:13:57.022 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:57.022 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:57.022 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3649489 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3649489' 00:13:57.283 killing process with pid 3649489 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3649489 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3649489 00:13:57.283 08:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.283 08:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:59.829 00:13:59.829 real 0m11.323s 00:13:59.829 user 0m8.747s 00:13:59.829 sys 0m5.756s 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:59.829 ************************************ 00:13:59.829 END TEST nvmf_target_discovery 00:13:59.829 ************************************ 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:59.829 ************************************ 00:13:59.829 START TEST nvmf_referrals 00:13:59.829 ************************************ 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:59.829 * Looking for test storage... 
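The teardown that closes nvmf_target_discovery above mirrors the setup; roughly, under the same sketch conventions as before:

  for i in 1 2 3 4; do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    scripts/rpc.py bdev_null_delete "Null$i"
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'   # empty output confirms a clean teardown

nvmftestfini then unloads nvme_tcp, nvme_fabrics and nvme_keyring, kills the target (pid 3649489 in this run) and restores iptables, after which run_test launches the next suite, referrals.sh, as traced above.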
00:13:59.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:59.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.829 --rc genhtml_branch_coverage=1 00:13:59.829 --rc genhtml_function_coverage=1 00:13:59.829 --rc genhtml_legend=1 00:13:59.829 --rc geninfo_all_blocks=1 00:13:59.829 --rc geninfo_unexecuted_blocks=1 00:13:59.829 00:13:59.829 ' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:59.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.829 --rc genhtml_branch_coverage=1 00:13:59.829 --rc genhtml_function_coverage=1 00:13:59.829 --rc genhtml_legend=1 00:13:59.829 --rc geninfo_all_blocks=1 00:13:59.829 --rc geninfo_unexecuted_blocks=1 00:13:59.829 00:13:59.829 ' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:59.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.829 --rc genhtml_branch_coverage=1 00:13:59.829 --rc genhtml_function_coverage=1 00:13:59.829 --rc genhtml_legend=1 00:13:59.829 --rc geninfo_all_blocks=1 00:13:59.829 --rc geninfo_unexecuted_blocks=1 00:13:59.829 00:13:59.829 ' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:59.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.829 --rc genhtml_branch_coverage=1 00:13:59.829 --rc genhtml_function_coverage=1 00:13:59.829 --rc genhtml_legend=1 00:13:59.829 --rc geninfo_all_blocks=1 00:13:59.829 --rc geninfo_unexecuted_blocks=1 00:13:59.829 00:13:59.829 ' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.829 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
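The NVME_HOSTNQN/NVME_HOSTID pair set up in nvmf/common.sh above comes from nvme-cli; the derivation is presumably equivalent to the sketch below (the exact parameter expansion used by common.sh is an assumption here):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: strip everything through "uuid:" to keep the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later consumed as: nvme discover "${NVME_HOST[@]}" -t tcp -a <traddr> -s <trsvcid>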
00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:59.830 08:28:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:07.971 08:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:07.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:07.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:07.971 08:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:07.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:07.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:14:07.971 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:07.972 08:28:58 
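The two "Found 0000:4b:00.x" / "Found net devices under ..." pairs above are nvmf/common.sh walking the PCI bus for supported NICs (Intel E810, device ID 0x159b in this run) and mapping each function to its netdev through sysfs; in essence (a hedged sketch, not the literal common.sh code):

  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do   # hypothetical enumeration of the E810 functions
    ls "/sys/bus/pci/devices/$pci/net/"                          # -> cvl_0_0 for 0000:4b:00.0, cvl_0_1 for 0000:4b:00.1
  done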
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:07.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:07.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms
00:14:07.972
00:14:07.972 --- 10.0.0.2 ping statistics ---
00:14:07.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:07.972 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:07.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:07.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms
00:14:07.972
00:14:07.972 --- 10.0.0.1 ping statistics ---
00:14:07.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:07.972 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=3653927
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 3653927
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3653927 ']'
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:07.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
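Condensed, the nvmf_tcp_init sequence traced above wires the two E810 ports back to back through a network namespace, so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, root namespace) talk over real hardware; the commands are the ones from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator side
  ping -c 1 10.0.0.2                                   # initiator -> target (0.615 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator (0.155 ms above)

The target then runs inside that namespace, which is why nvmf_tgt is launched above as "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF".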
00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.972 08:28:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 [2024-10-01 08:28:58.798932] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:14:07.972 [2024-10-01 08:28:58.798992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.972 [2024-10-01 08:28:58.869552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.972 [2024-10-01 08:28:58.933660] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.972 [2024-10-01 08:28:58.933696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.972 [2024-10-01 08:28:58.933704] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.972 [2024-10-01 08:28:58.933711] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.972 [2024-10-01 08:28:58.933717] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.972 [2024-10-01 08:28:58.935229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.972 [2024-10-01 08:28:58.935530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.972 [2024-10-01 08:28:58.935662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.972 [2024-10-01 08:28:58.935662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 [2024-10-01 08:28:59.629914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:14:07.972 [2024-10-01 08:28:59.646164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:07.972 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.973 08:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:08.233 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.234 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.494 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.494 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:08.494 08:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:08.494 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:08.494 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:08.494 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:08.494 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:08.494 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:08.755 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:09.016 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:09.016 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:09.016 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:09.016 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:09.016 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.016 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.277 08:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.277 08:29:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.277 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:09.277 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:09.277 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:09.277 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:09.277 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:09.277 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.277 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:09.277 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:09.538 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:09.538 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:09.538 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:09.538 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:09.538 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:09.538 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.538 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:09.797 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
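The referral checks that just completed all follow one pattern: after every add and remove, compare the target's own referral list (RPC view) against what an initiator reads from the discovery log (nvme view). The two views are extracted exactly as in the trace; only the host NQN/ID arguments, which are machine-specific, are omitted here:

  # Target-side view: traddr of each configured referral.
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # Initiator-side view: every discovery-log record except the discovery
  # subsystem currently being queried.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The later passes additionally key on .subnqn, since a referral added with -n nqn.2016-06.io.spdk:cnode1 must surface as an "nvme subsystem" record rather than a "discovery subsystem referral".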
00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:10.057 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:10.057 rmmod nvme_tcp 00:14:10.057 rmmod nvme_fabrics 00:14:10.057 rmmod nvme_keyring 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 3653927 ']' 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 3653927 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3653927 ']' 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3653927 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3653927 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3653927' 00:14:10.318 killing process with pid 3653927 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3653927 00:14:10.318 08:29:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3653927 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.318 08:29:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.318 08:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.861 00:14:12.861 real 0m13.041s 00:14:12.861 user 0m15.745s 00:14:12.861 sys 0m6.341s 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.861 ************************************ 00:14:12.861 END TEST nvmf_referrals 00:14:12.861 ************************************ 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.861 ************************************ 00:14:12.861 START TEST nvmf_connect_disconnect 00:14:12.861 ************************************ 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:12.861 * Looking for test storage... 00:14:12.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.861 08:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:12.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.861 --rc genhtml_branch_coverage=1 00:14:12.861 --rc genhtml_function_coverage=1 00:14:12.861 --rc genhtml_legend=1 00:14:12.861 --rc geninfo_all_blocks=1 00:14:12.861 --rc geninfo_unexecuted_blocks=1 00:14:12.861 00:14:12.861 ' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:12.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.861 --rc genhtml_branch_coverage=1 00:14:12.861 --rc genhtml_function_coverage=1 00:14:12.861 --rc genhtml_legend=1 00:14:12.861 --rc geninfo_all_blocks=1 00:14:12.861 --rc geninfo_unexecuted_blocks=1 00:14:12.861 00:14:12.861 ' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:12.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.861 --rc genhtml_branch_coverage=1 00:14:12.861 --rc genhtml_function_coverage=1 00:14:12.861 --rc genhtml_legend=1 00:14:12.861 --rc geninfo_all_blocks=1 00:14:12.861 --rc geninfo_unexecuted_blocks=1 00:14:12.861 00:14:12.861 ' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:12.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.861 --rc genhtml_branch_coverage=1 00:14:12.861 --rc genhtml_function_coverage=1 00:14:12.861 --rc genhtml_legend=1 00:14:12.861 --rc geninfo_all_blocks=1 00:14:12.861 --rc geninfo_unexecuted_blocks=1 00:14:12.861 00:14:12.861 ' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.861 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.862 08:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:12.862 08:29:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:20.999 
08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:20.999 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:20.999 08:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:20.999 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.999 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:21.000 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:21.000 08:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:21.000 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
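The namespace plumbing running through this stretch of the trace splits the NIC's two ports across network namespaces: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the commands in the trace (interface names are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

The cross-namespace pings that follow are the smoke test that this topology actually forwards traffic both ways.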
00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:21.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:14:21.000 00:14:21.000 --- 10.0.0.2 ping statistics --- 00:14:21.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.000 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:14:21.000 00:14:21.000 --- 10.0.0.1 ping statistics --- 00:14:21.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.000 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # nvmfpid=3658954 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 3658954 00:14:21.000 08:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3658954 ']' 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.000 08:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.000 [2024-10-01 08:29:11.858125] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:14:21.000 [2024-10-01 08:29:11.858181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.000 [2024-10-01 08:29:11.924964] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.000 [2024-10-01 08:29:11.988859] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.000 [2024-10-01 08:29:11.988893] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.000 [2024-10-01 08:29:11.988901] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.000 [2024-10-01 08:29:11.988908] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.000 [2024-10-01 08:29:11.988914] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
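The startup notices above describe the tracing hook: because the target was launched with -e 0xFFFF, every tracepoint group is recording into /dev/shm/nvmf_trace.0. Capturing it, per the notice text (the offline form is an assumption about spdk_trace's file flag, not taken from this log):

  # Live snapshot of the running target's tracepoints.
  $SPDK_DIR/build/bin/spdk_trace -s nvmf -i 0
  # Or keep the shm file and decode it after the app exits.
  cp /dev/shm/nvmf_trace.0 /tmp/
  $SPDK_DIR/build/bin/spdk_trace -f /tmp/nvmf_trace.0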
00:14:21.000 [2024-10-01 08:29:11.990434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.000 [2024-10-01 08:29:11.990549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.000 [2024-10-01 08:29:11.990703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.000 [2024-10-01 08:29:11.990704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.000 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.000 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:21.000 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:21.000 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.001 [2024-10-01 08:29:12.707185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.001 08:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:21.001 [2024-10-01 08:29:12.766225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:21.001 08:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:25.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.327 rmmod nvme_tcp 00:14:39.327 rmmod nvme_fabrics 00:14:39.327 rmmod nvme_keyring 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 3658954 ']' 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 3658954 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3658954 ']' 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3658954 00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
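# The test body above, condensed: one TCP transport, one malloc bdev, one
# subsystem with a namespace and a listener, then five connect/disconnect
# rounds from the kernel initiator. A sketch using plain rpc.py calls in place
# of the script's rpc_cmd helper (same RPCs and arguments as the trace;
# assumes $SPDK_DIR from the sketch above):
rpc="$SPDK_DIR/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
bdev=$($rpc bdev_malloc_create 64 512)               # prints the name, e.g. Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
for i in 1 2 3 4 5; do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # the "disconnected 1 controller(s)" lines
done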
00:14:39.327 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3658954 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3658954' 00:14:39.588 killing process with pid 3658954 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3658954 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3658954 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.588 08:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.134 00:14:42.134 real 0m29.164s 00:14:42.134 user 1m19.116s 00:14:42.134 sys 0m6.988s 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:42.134 ************************************ 00:14:42.134 END TEST nvmf_connect_disconnect 00:14:42.134 ************************************ 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.134 08:29:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.134 ************************************ 00:14:42.134 START TEST nvmf_multitarget 00:14:42.134 ************************************ 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:42.134 * Looking for test storage... 00:14:42.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:42.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.134 --rc genhtml_branch_coverage=1 00:14:42.134 --rc genhtml_function_coverage=1 00:14:42.134 --rc genhtml_legend=1 00:14:42.134 --rc geninfo_all_blocks=1 00:14:42.134 --rc geninfo_unexecuted_blocks=1 00:14:42.134 00:14:42.134 ' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:42.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.134 --rc genhtml_branch_coverage=1 00:14:42.134 --rc genhtml_function_coverage=1 00:14:42.134 --rc genhtml_legend=1 00:14:42.134 --rc geninfo_all_blocks=1 00:14:42.134 --rc geninfo_unexecuted_blocks=1 00:14:42.134 00:14:42.134 ' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:42.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.134 --rc genhtml_branch_coverage=1 00:14:42.134 --rc genhtml_function_coverage=1 00:14:42.134 --rc genhtml_legend=1 00:14:42.134 --rc geninfo_all_blocks=1 00:14:42.134 --rc geninfo_unexecuted_blocks=1 00:14:42.134 00:14:42.134 ' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:42.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.134 --rc genhtml_branch_coverage=1 00:14:42.134 --rc genhtml_function_coverage=1 00:14:42.134 --rc genhtml_legend=1 00:14:42.134 --rc geninfo_all_blocks=1 00:14:42.134 --rc geninfo_unexecuted_blocks=1 00:14:42.134 00:14:42.134 ' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.134 08:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.134 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:42.135 08:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:42.135 08:29:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:50.278 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:50.278 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
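# NIC discovery in the trace keys off PCI device IDs (0x1592/0x159b are Intel
# E810 parts, 0x37d2 is X722, the 0x15b3 entries are Mellanox) and then maps
# each matching PCI function to its kernel netdev through sysfs. A standalone
# sketch of that mapping for the first address found above:
pci=0000:4b:00.0
for path in /sys/bus/pci/devices/$pci/net/*; do
    dev=${path##*/}                                  # e.g. cvl_0_0
    echo "Found net device under $pci: $dev ($(cat /sys/class/net/$dev/operstate))"
done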
00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:50.278 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.278 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:50.279 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.279 08:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:50.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:14:50.279 00:14:50.279 --- 10.0.0.2 ping statistics --- 00:14:50.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.279 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:14:50.279 00:14:50.279 --- 10.0.0.1 ping statistics --- 00:14:50.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.279 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=3666989 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 3666989 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3666989 ']' 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:50.279 [2024-10-01 08:29:41.136116] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:14:50.279 [2024-10-01 08:29:41.136188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.279 [2024-10-01 08:29:41.208017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.279 [2024-10-01 08:29:41.281999] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.279 [2024-10-01 08:29:41.282038] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.279 [2024-10-01 08:29:41.282046] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.279 [2024-10-01 08:29:41.282053] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.279 [2024-10-01 08:29:41.282058] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.279 [2024-10-01 08:29:41.283631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.279 [2024-10-01 08:29:41.283746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.279 [2024-10-01 08:29:41.283902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.279 [2024-10-01 08:29:41.283903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:50.279 08:29:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:50.279 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:50.279 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:50.541 "nvmf_tgt_1" 00:14:50.541 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:50.541 "nvmf_tgt_2" 00:14:50.541 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
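# The multitarget flow here: jq length on nvmf_get_targets confirms only the
# default target exists, two extra targets are created, and (below) deleted
# again, with the count re-checked at each step. The same sequence as bare
# commands, assuming $SPDK_DIR as before:
rpc_py=$SPDK_DIR/test/nvmf/target/multitarget_rpc.py
test "$($rpc_py nvmf_get_targets | jq length)" -eq 1    # just the default target
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
test "$($rpc_py nvmf_get_targets | jq length)" -eq 3
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
test "$($rpc_py nvmf_get_targets | jq length)" -eq 1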
00:14:50.541 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:50.802 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:50.802 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:50.802 true 00:14:50.802 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:50.802 true 00:14:50.802 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:50.802 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.063 rmmod nvme_tcp 00:14:51.063 rmmod nvme_fabrics 00:14:51.063 rmmod nvme_keyring 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 3666989 ']' 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 3666989 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3666989 ']' 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3666989 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3666989 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:51.063 08:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3666989' 00:14:51.063 killing process with pid 3666989 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3666989 00:14:51.063 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3666989 00:14:51.325 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:51.325 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:51.325 08:29:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.325 08:29:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:53.872 00:14:53.872 real 0m11.567s 00:14:53.872 user 0m9.805s 00:14:53.872 sys 0m5.943s 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:53.872 ************************************ 00:14:53.872 END TEST nvmf_multitarget 00:14:53.872 ************************************ 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.872 ************************************ 00:14:53.872 START TEST nvmf_rpc 00:14:53.872 ************************************ 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:53.872 * Looking for test storage... 
00:14:53.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:53.872 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:53.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.873 --rc genhtml_branch_coverage=1 00:14:53.873 --rc genhtml_function_coverage=1 00:14:53.873 --rc genhtml_legend=1 00:14:53.873 --rc geninfo_all_blocks=1 00:14:53.873 --rc geninfo_unexecuted_blocks=1 00:14:53.873 00:14:53.873 ' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:53.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.873 --rc genhtml_branch_coverage=1 00:14:53.873 --rc genhtml_function_coverage=1 00:14:53.873 --rc genhtml_legend=1 00:14:53.873 --rc geninfo_all_blocks=1 00:14:53.873 --rc geninfo_unexecuted_blocks=1 00:14:53.873 00:14:53.873 ' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:53.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.873 --rc genhtml_branch_coverage=1 00:14:53.873 --rc genhtml_function_coverage=1 00:14:53.873 --rc genhtml_legend=1 00:14:53.873 --rc geninfo_all_blocks=1 00:14:53.873 --rc geninfo_unexecuted_blocks=1 00:14:53.873 00:14:53.873 ' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:53.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.873 --rc genhtml_branch_coverage=1 00:14:53.873 --rc genhtml_function_coverage=1 00:14:53.873 --rc genhtml_legend=1 00:14:53.873 --rc geninfo_all_blocks=1 00:14:53.873 --rc geninfo_unexecuted_blocks=1 00:14:53.873 00:14:53.873 ' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
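# The lt/cmp_versions trace (here and in the multitarget test above) boils
# down to a field-wise numeric compare of dotted version strings, used to pick
# lcov options. A minimal sketch of the same idea, assuming purely numeric
# version fields:
version_lt() {                        # returns 0 when $1 < $2
    local IFS=.-: i n
    local -a a=($1) b=($2)            # split both versions on . - :
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"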
00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:53.873 08:29:45 
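[Annotation] The "[: : integer expression expected" message above (nvmf/common.sh line 33 per the log) is the usual symptom of feeding an empty variable to a POSIX [ ... -eq ... ] test. A defensive sketch; SOME_FLAG is a placeholder, the log does not show which variable was empty:

    # '[' "" -eq 1 ']' aborts with "integer expression expected";
    # defaulting the variable keeps the test quiet and well-defined:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi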
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:53.873 08:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:15:02.019 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:02.020 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:02.020 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:02.020 
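[Annotation] gather_supported_nvmf_pci_devs above builds per-family device lists (e810, x722, mlx) from a vendor:device cache, then reports each match ("Found 0000:4b:00.0 (0x8086 - 0x159b)"). A simplified sketch of the same scan taken straight off sysfs — the real script goes through its pci_bus_cache helper instead:

    intel=0x8086
    e810=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor")
        device=$(< "$pci/device")
        # E810 family IDs per the arrays traced above: 0x1592 and 0x159b
        if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
            e810+=("${pci##*/}")
            echo "Found ${pci##*/} ($vendor - $device)"
        fi
    done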
08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:02.020 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:02.020 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:02.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:15:02.020 00:15:02.020 --- 10.0.0.2 ping statistics --- 00:15:02.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.020 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:15:02.020 00:15:02.020 --- 10.0.0.1 ping statistics --- 00:15:02.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.020 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=3671453 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 3671453 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3671453 ']' 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.020 08:29:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.020 [2024-10-01 08:29:52.830417] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
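[Annotation] nvmf_tcp_init above wires the test topology: the target port (cvl_0_0) moves into a private network namespace, the initiator port (cvl_0_1) stays in the root namespace, port 4420 is opened, and both sides ping each other before nvmf_tgt starts. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # ipts (nvmf/common.sh@786 above) is a thin iptables wrapper that tags
    # rules with an SPDK_NVMF comment so teardown can find them later:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> initiator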
00:15:02.020 [2024-10-01 08:29:52.830489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.020 [2024-10-01 08:29:52.903587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.020 [2024-10-01 08:29:52.979089] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.020 [2024-10-01 08:29:52.979129] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.020 [2024-10-01 08:29:52.979137] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.020 [2024-10-01 08:29:52.979144] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.020 [2024-10-01 08:29:52.979149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.020 [2024-10-01 08:29:52.980832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.020 [2024-10-01 08:29:52.980947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.020 [2024-10-01 08:29:52.981102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.020 [2024-10-01 08:29:52.981289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.020 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.020 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:02.020 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:02.021 "tick_rate": 2400000000, 00:15:02.021 "poll_groups": [ 00:15:02.021 { 00:15:02.021 "name": "nvmf_tgt_poll_group_000", 00:15:02.021 "admin_qpairs": 0, 00:15:02.021 "io_qpairs": 0, 00:15:02.021 "current_admin_qpairs": 0, 00:15:02.021 "current_io_qpairs": 0, 00:15:02.021 "pending_bdev_io": 0, 00:15:02.021 "completed_nvme_io": 0, 00:15:02.021 "transports": [] 00:15:02.021 }, 00:15:02.021 { 00:15:02.021 "name": "nvmf_tgt_poll_group_001", 00:15:02.021 "admin_qpairs": 0, 00:15:02.021 "io_qpairs": 0, 00:15:02.021 "current_admin_qpairs": 0, 00:15:02.021 "current_io_qpairs": 0, 00:15:02.021 "pending_bdev_io": 0, 00:15:02.021 "completed_nvme_io": 0, 00:15:02.021 "transports": [] 00:15:02.021 }, 00:15:02.021 { 00:15:02.021 "name": "nvmf_tgt_poll_group_002", 00:15:02.021 "admin_qpairs": 0, 00:15:02.021 "io_qpairs": 0, 00:15:02.021 
"current_admin_qpairs": 0, 00:15:02.021 "current_io_qpairs": 0, 00:15:02.021 "pending_bdev_io": 0, 00:15:02.021 "completed_nvme_io": 0, 00:15:02.021 "transports": [] 00:15:02.021 }, 00:15:02.021 { 00:15:02.021 "name": "nvmf_tgt_poll_group_003", 00:15:02.021 "admin_qpairs": 0, 00:15:02.021 "io_qpairs": 0, 00:15:02.021 "current_admin_qpairs": 0, 00:15:02.021 "current_io_qpairs": 0, 00:15:02.021 "pending_bdev_io": 0, 00:15:02.021 "completed_nvme_io": 0, 00:15:02.021 "transports": [] 00:15:02.021 } 00:15:02.021 ] 00:15:02.021 }' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.021 [2024-10-01 08:29:53.795295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:02.021 "tick_rate": 2400000000, 00:15:02.021 "poll_groups": [ 00:15:02.021 { 00:15:02.021 "name": "nvmf_tgt_poll_group_000", 00:15:02.021 "admin_qpairs": 0, 00:15:02.021 "io_qpairs": 0, 00:15:02.021 "current_admin_qpairs": 0, 00:15:02.021 "current_io_qpairs": 0, 00:15:02.021 "pending_bdev_io": 0, 00:15:02.021 "completed_nvme_io": 0, 00:15:02.021 "transports": [ 00:15:02.021 { 00:15:02.021 "trtype": "TCP" 00:15:02.021 } 00:15:02.021 ] 00:15:02.021 }, 00:15:02.021 { 00:15:02.021 "name": "nvmf_tgt_poll_group_001", 00:15:02.021 "admin_qpairs": 0, 00:15:02.021 "io_qpairs": 0, 00:15:02.021 "current_admin_qpairs": 0, 00:15:02.021 "current_io_qpairs": 0, 00:15:02.021 "pending_bdev_io": 0, 00:15:02.021 "completed_nvme_io": 0, 00:15:02.021 "transports": [ 00:15:02.021 { 00:15:02.021 "trtype": "TCP" 00:15:02.021 } 00:15:02.021 ] 00:15:02.021 }, 00:15:02.021 { 00:15:02.021 "name": "nvmf_tgt_poll_group_002", 00:15:02.021 "admin_qpairs": 0, 00:15:02.021 "io_qpairs": 0, 00:15:02.021 "current_admin_qpairs": 0, 00:15:02.021 "current_io_qpairs": 0, 00:15:02.021 "pending_bdev_io": 0, 00:15:02.021 "completed_nvme_io": 0, 00:15:02.021 "transports": [ 00:15:02.021 { 00:15:02.021 "trtype": "TCP" 
00:15:02.021 } 00:15:02.021 ] 00:15:02.021 }, 00:15:02.021 { 00:15:02.021 "name": "nvmf_tgt_poll_group_003", 00:15:02.021 "admin_qpairs": 0, 00:15:02.021 "io_qpairs": 0, 00:15:02.021 "current_admin_qpairs": 0, 00:15:02.021 "current_io_qpairs": 0, 00:15:02.021 "pending_bdev_io": 0, 00:15:02.021 "completed_nvme_io": 0, 00:15:02.021 "transports": [ 00:15:02.021 { 00:15:02.021 "trtype": "TCP" 00:15:02.021 } 00:15:02.021 ] 00:15:02.021 } 00:15:02.021 ] 00:15:02.021 }' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:02.021 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 Malloc1 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
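[Annotation] jcount and jsum above are small jq wrappers from target/rpc.sh used to assert on the nvmf_get_stats JSON: jcount counts the lines a filter emits, jsum totals a numeric field. A reconstruction from the trace — how $stats is fed in is an assumption here:

    jcount() {
        local filter=$1
        jq "$filter" <<< "$stats" | wc -l
    }
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # e.g. jcount '.poll_groups[].name'      -> 4 (one poll group per core, -m 0xF)
    #      jsum   '.poll_groups[].io_qpairs' -> 0 before any host connects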
common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 [2024-10-01 08:29:53.958898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:02.282 08:29:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:02.282 [2024-10-01 08:29:53.985787] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:02.282 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:02.282 could not add new controller: failed to write to nvme-fabrics device 00:15:02.282 08:29:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:02.282 08:29:54 
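[Annotation] The NOT/valid_exec_arg trace above inverts the expected outcome: this nvme connect is supposed to fail (the subsystem does not yet allow the host NQN), so its nonzero exit (es=1) makes the step pass. A minimal sketch of the inversion, ignoring valid_exec_arg's binary-resolution details; the real helper also screens out signal exits, hence the (( es > 128 )) check in the trace:

    NOT() {
        local es=0
        "$@" || es=$?
        # a nonzero exit is exactly what the negative test wants
        (( es != 0 ))
    }
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420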
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:02.282 08:29:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:02.282 08:29:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:02.282 08:29:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.282 08:29:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.282 08:29:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.282 08:29:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.282 08:29:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.719 08:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.719 08:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.719 08:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.719 08:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:03.719 08:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:05.723 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.723 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.723 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.723 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:05.723 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.723 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:05.723 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:05.983 [2024-10-01 08:29:57.722957] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:05.983 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:05.983 could not add new controller: failed to write to nvme-fabrics device 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.983 
08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.983 08:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.895 08:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:07.895 08:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:07.895 08:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.895 08:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:07.895 08:29:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:09.807 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:09.807 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:09.807 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.807 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:09.807 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.807 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:09.807 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.807 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:09.808 
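[Annotation] waitforserial above polls lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears, sleeping 2 s between tries with a 15-iteration cap; waitforserial_disconnect does the inverse after nvme disconnect. A sketch matching the traced flow, with the retry bookkeeping simplified:

    waitforserial() {
        local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices >= nvme_device_counter )) && return 0
        done
        return 1
    }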
08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.808 [2024-10-01 08:30:01.453367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.808 08:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:11.719 08:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.719 08:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:11.719 08:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.719 08:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:11.719 08:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:13.628 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.629 [2024-10-01 08:30:05.213932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.629 08:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.010 08:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.010 08:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:15.010 08:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.010 08:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:15.010 08:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
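[Annotation] From here the trace repeats the same create/connect/teardown cycle; target/rpc.sh runs it $loops (= 5, set at rpc.sh@11) times. The per-iteration sequence, assembled from the rpc.sh@81-94 steps visible in the trace (NVME_HOST carries the --hostnqn/--hostid pair defined in nvmf/common.sh):

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done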
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.550 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.551 [2024-10-01 08:30:08.972570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.551 08:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:18.933 08:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:18.933 08:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:18.933 08:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.933 08:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:18.933 08:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:20.848 
08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.848 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.109 [2024-10-01 08:30:12.722769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.109 08:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:23.020 08:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.020 08:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:23.020 08:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.020 08:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:23.020 08:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 [2024-10-01 08:30:16.488512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.931 08:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.314 08:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:26.314 08:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:26.314 08:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.314 08:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:26.314 08:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:28.861 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:28.861 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:28.861 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.861 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:28.861 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:28.862 
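The five iterations traced above each drive the same create/connect/verify/teardown cycle from target/rpc.sh. A condensed standalone sketch of one iteration, reconstructed from the xtrace (the rpc.py path, NQN, serial, host identity, addresses, and loop bound are taken verbatim from the log; error handling and xtrace plumbing are omitted):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    serial=SPDKISFASTANDAWESOME
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

    for i in $(seq 1 5); do
        # Target side: subsystem, TCP listener, one namespace (nsid 5), open host access.
        $rpc nvmf_create_subsystem "$nqn" -s "$serial"
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        # Initiator side: connect, then poll as waitforserial does until exactly
        # one block device with the expected serial shows up in lsblk.
        nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
        tries=0
        while (( tries++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && break
        done
        # Teardown in reverse order, as the rpc.sh@90-94 steps in the trace do.
        nvme disconnect -n "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 5
        $rpc nvmf_delete_subsystem "$nqn"
    done
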
08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 [2024-10-01 08:30:20.258577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 [2024-10-01 08:30:20.322687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 
08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 [2024-10-01 08:30:20.386901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.862 [2024-10-01 08:30:20.451096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.862 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 [2024-10-01 08:30:20.511286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:28.863 "tick_rate": 2400000000, 00:15:28.863 "poll_groups": [ 00:15:28.863 { 00:15:28.863 "name": "nvmf_tgt_poll_group_000", 00:15:28.863 "admin_qpairs": 0, 00:15:28.863 "io_qpairs": 224, 00:15:28.863 "current_admin_qpairs": 0, 00:15:28.863 "current_io_qpairs": 0, 00:15:28.863 "pending_bdev_io": 0, 00:15:28.863 "completed_nvme_io": 275, 00:15:28.863 "transports": [ 00:15:28.863 { 00:15:28.863 "trtype": "TCP" 00:15:28.863 } 00:15:28.863 ] 00:15:28.863 }, 00:15:28.863 { 00:15:28.863 "name": "nvmf_tgt_poll_group_001", 00:15:28.863 "admin_qpairs": 1, 00:15:28.863 "io_qpairs": 223, 00:15:28.863 "current_admin_qpairs": 0, 00:15:28.863 "current_io_qpairs": 0, 00:15:28.863 "pending_bdev_io": 0, 00:15:28.863 "completed_nvme_io": 354, 00:15:28.863 "transports": [ 00:15:28.863 { 00:15:28.863 "trtype": "TCP" 00:15:28.863 } 00:15:28.863 ] 00:15:28.863 }, 00:15:28.863 { 00:15:28.863 "name": "nvmf_tgt_poll_group_002", 00:15:28.863 "admin_qpairs": 6, 00:15:28.863 "io_qpairs": 218, 00:15:28.863 "current_admin_qpairs": 0, 00:15:28.863 "current_io_qpairs": 0, 00:15:28.863 "pending_bdev_io": 0, 00:15:28.863 "completed_nvme_io": 386, 00:15:28.863 "transports": [ 00:15:28.863 { 00:15:28.863 "trtype": "TCP" 00:15:28.863 } 00:15:28.863 ] 00:15:28.863 }, 00:15:28.863 { 00:15:28.863 "name": "nvmf_tgt_poll_group_003", 00:15:28.863 "admin_qpairs": 0, 00:15:28.863 "io_qpairs": 224, 00:15:28.863 "current_admin_qpairs": 0, 00:15:28.863 "current_io_qpairs": 0, 00:15:28.863 "pending_bdev_io": 0, 00:15:28.863 "completed_nvme_io": 224, 00:15:28.863 "transports": [ 00:15:28.863 { 00:15:28.863 "trtype": "TCP" 00:15:28.863 } 00:15:28.863 ] 00:15:28.863 } 00:15:28.863 ] 00:15:28.863 }' 00:15:28.863 08:30:20 
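The jsum checks that follow (target/rpc.sh@112-113) reduce the stats JSON above to scalar totals before asserting they are positive. A minimal sketch of that helper using the same jq-plus-awk pipeline shown in the trace, with the stats captured once the way the test does ($rpc as in the earlier sketch):

    stats=$($rpc nvmf_get_stats)

    jsum() {   # sum one numeric jq filter across all poll groups
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # Against the stats printed above: admin qpairs 0+1+6+0 = 7 and
    # io qpairs 224+223+218+224 = 889, so both assertions below hold.
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))
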
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.863 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:29.124 rmmod nvme_tcp 00:15:29.124 rmmod nvme_fabrics 00:15:29.124 rmmod nvme_keyring 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 3671453 ']' 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 3671453 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3671453 ']' 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3671453 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3671453 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3671453' 00:15:29.124 killing process with pid 3671453 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3671453 00:15:29.124 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3671453 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.385 08:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.299 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:31.299 00:15:31.299 real 0m37.884s 00:15:31.299 user 1m53.768s 00:15:31.299 sys 0m7.714s 00:15:31.299 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.299 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.299 ************************************ 00:15:31.299 END TEST nvmf_rpc 00:15:31.299 ************************************ 00:15:31.299 08:30:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:31.299 08:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:31.299 08:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:31.299 08:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.562 ************************************ 00:15:31.562 START TEST nvmf_invalid 00:15:31.562 ************************************ 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:31.562 * Looking for test storage... 
00:15:31.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.562 --rc genhtml_branch_coverage=1 00:15:31.562 --rc genhtml_function_coverage=1 00:15:31.562 --rc genhtml_legend=1 00:15:31.562 --rc geninfo_all_blocks=1 00:15:31.562 --rc geninfo_unexecuted_blocks=1 00:15:31.562 00:15:31.562 ' 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.562 --rc genhtml_branch_coverage=1 00:15:31.562 --rc genhtml_function_coverage=1 00:15:31.562 --rc genhtml_legend=1 00:15:31.562 --rc geninfo_all_blocks=1 00:15:31.562 --rc geninfo_unexecuted_blocks=1 00:15:31.562 00:15:31.562 ' 00:15:31.562 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.562 --rc genhtml_branch_coverage=1 00:15:31.562 --rc genhtml_function_coverage=1 00:15:31.562 --rc genhtml_legend=1 00:15:31.562 --rc geninfo_all_blocks=1 00:15:31.563 --rc geninfo_unexecuted_blocks=1 00:15:31.563 00:15:31.563 ' 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:31.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.563 --rc genhtml_branch_coverage=1 00:15:31.563 --rc genhtml_function_coverage=1 00:15:31.563 --rc genhtml_legend=1 00:15:31.563 --rc geninfo_all_blocks=1 00:15:31.563 --rc geninfo_unexecuted_blocks=1 00:15:31.563 00:15:31.563 ' 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:31.563 08:30:23 
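The lcov gate traced above (common/autotest_common.sh@1681 feeding scripts/common.sh "lt 1.15 2") compares version strings field by field, splitting on dots, dashes, and colons. A condensed standalone sketch of the less-than case only (the real helper routes through cmp_versions; the field splitting and ternary loop bound mirror the trace):

    lt() {   # succeed iff version $1 sorts strictly before version $2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # lt 1.15 2 succeeds here, which is why this run keeps the legacy
    # --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options.
    lt 1.15 2 && echo 'lcov predates 2.x'
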
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:31.563 08:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:39.709 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:39.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:39.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
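The probe entries above and below all follow one pattern: read each PCI function's vendor and device IDs out of sysfs, match them against the known E810 IDs seen in the trace (0x1592, 0x159b), and record any net interfaces bound under that function. A minimal standalone sketch of that pattern (an illustration only, not SPDK's gather_supported_nvmf_pci_devs helper verbatim):

```bash
#!/usr/bin/env bash
# Sketch: walk PCI devices, match the Intel E810 IDs from the trace, and list
# the net interfaces registered under each function (e.g. cvl_0_0, cvl_0_1).
intel=0x8086
e810=(0x1592 0x159b)                      # device IDs probed in the trace
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
  [[ $vendor == "$intel" ]] || continue
  for id in "${e810[@]}"; do
    if [[ $device == "$id" ]]; then
      echo "Found ${dev##*/} ($vendor - $device)"
      ls "$dev/net" 2>/dev/null           # netdev names, if a driver is bound
    fi
  done
done
```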
00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:39.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:39.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:39.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:15:39.710 00:15:39.710 --- 10.0.0.2 ping statistics --- 00:15:39.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.710 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:15:39.710 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:15:39.710 00:15:39.710 --- 10.0.0.1 ping statistics --- 00:15:39.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.710 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=3681869 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 3681869 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3681869 ']' 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.711 08:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:39.711 [2024-10-01 08:30:30.622598] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
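The nvmfappstart/waitforlisten sequence just above amounts to launching nvmf_tgt inside the freshly created cvl_0_0_ns_spdk namespace and then polling its JSON-RPC socket until the app answers. A condensed sketch of that start-and-wait pattern (the polling loop is an assumption about the harness, not its exact code; paths and socket are the ones visible in the trace):

```bash
# Sketch: run nvmf_tgt in the target netns, then poll the RPC socket until
# the app is ready, bailing out early if the process dies first.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"
```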
00:15:39.711 [2024-10-01 08:30:30.622651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.711 [2024-10-01 08:30:30.690219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.711 [2024-10-01 08:30:30.754107] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.711 [2024-10-01 08:30:30.754146] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.711 [2024-10-01 08:30:30.754154] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.711 [2024-10-01 08:30:30.754161] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.711 [2024-10-01 08:30:30.754167] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.711 [2024-10-01 08:30:30.755904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.711 [2024-10-01 08:30:30.756019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.711 [2024-10-01 08:30:30.756121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.711 [2024-10-01 08:30:30.756121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.711 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.711 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:39.711 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:39.711 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.711 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:39.711 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.711 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:39.711 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10148 00:15:39.972 [2024-10-01 08:30:31.613653] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:39.972 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:39.972 { 00:15:39.972 "nqn": "nqn.2016-06.io.spdk:cnode10148", 00:15:39.972 "tgt_name": "foobar", 00:15:39.972 "method": "nvmf_create_subsystem", 00:15:39.972 "req_id": 1 00:15:39.972 } 00:15:39.972 Got JSON-RPC error response 00:15:39.972 response: 00:15:39.972 { 00:15:39.972 "code": -32603, 00:15:39.972 "message": "Unable to find target foobar" 00:15:39.972 }' 00:15:39.972 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:39.972 { 00:15:39.972 "nqn": "nqn.2016-06.io.spdk:cnode10148", 00:15:39.972 "tgt_name": "foobar", 00:15:39.972 "method": "nvmf_create_subsystem", 00:15:39.972 "req_id": 1 00:15:39.972 } 00:15:39.972 Got JSON-RPC error response 00:15:39.972 
response: 00:15:39.972 { 00:15:39.972 "code": -32603, 00:15:39.972 "message": "Unable to find target foobar" 00:15:39.972 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:39.972 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:39.972 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12513 00:15:40.232 [2024-10-01 08:30:31.806321] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12513: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:40.232 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:40.232 { 00:15:40.232 "nqn": "nqn.2016-06.io.spdk:cnode12513", 00:15:40.232 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:40.232 "method": "nvmf_create_subsystem", 00:15:40.232 "req_id": 1 00:15:40.232 } 00:15:40.232 Got JSON-RPC error response 00:15:40.232 response: 00:15:40.232 { 00:15:40.232 "code": -32602, 00:15:40.232 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:40.232 }' 00:15:40.232 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:40.232 { 00:15:40.232 "nqn": "nqn.2016-06.io.spdk:cnode12513", 00:15:40.232 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:40.232 "method": "nvmf_create_subsystem", 00:15:40.232 "req_id": 1 00:15:40.232 } 00:15:40.232 Got JSON-RPC error response 00:15:40.232 response: 00:15:40.232 { 00:15:40.232 "code": -32602, 00:15:40.232 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:40.233 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:40.233 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:40.233 08:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1588 00:15:40.233 [2024-10-01 08:30:31.998887] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1588: invalid model number 'SPDK_Controller' 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:40.233 { 00:15:40.233 "nqn": "nqn.2016-06.io.spdk:cnode1588", 00:15:40.233 "model_number": "SPDK_Controller\u001f", 00:15:40.233 "method": "nvmf_create_subsystem", 00:15:40.233 "req_id": 1 00:15:40.233 } 00:15:40.233 Got JSON-RPC error response 00:15:40.233 response: 00:15:40.233 { 00:15:40.233 "code": -32602, 00:15:40.233 "message": "Invalid MN SPDK_Controller\u001f" 00:15:40.233 }' 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:40.233 { 00:15:40.233 "nqn": "nqn.2016-06.io.spdk:cnode1588", 00:15:40.233 "model_number": "SPDK_Controller\u001f", 00:15:40.233 "method": "nvmf_create_subsystem", 00:15:40.233 "req_id": 1 00:15:40.233 } 00:15:40.233 Got JSON-RPC error response 00:15:40.233 response: 00:15:40.233 { 00:15:40.233 "code": -32602, 00:15:40.233 "message": "Invalid MN SPDK_Controller\u001f" 00:15:40.233 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:40.233 08:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.233 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.493 08:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:40.493 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:40.494 
08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
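The long printf %x / echo -e run surrounding this point is gen_random_s assembling a test string one character per iteration from a 96-entry pool (ASCII 32 through 127, the chars=() array dumped above). Reduced to its core, the generator looks roughly like this (a sketch under that reading of the trace, not the helper verbatim):

```bash
# Sketch of gen_random_s: append $length characters drawn from the
# printable-through-DEL pool (ASCII 32..127) held in the chars array.
gen_random_s_sketch() {
  local length=$1 ll string=
  local chars=( $(seq 32 127) )           # same 96-entry pool as the trace
  for (( ll = 0; ll < length; ll++ )); do
    string+=$(echo -en "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
  done
  echo "$string"
}
gen_random_s_sketch 21   # a 21-char string like the serial numbers tested here
```

Because the script set RANDOM=0 earlier (assigning to RANDOM reseeds bash's generator), the same "random" serial and model numbers come out on every run, which keeps these failure cases reproducible.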
00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '+8%Zho\[XzvsMiy.04fh' 00:15:40.494 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+8%Zho\[XzvsMiy.04fh' nqn.2016-06.io.spdk:cnode20578 00:15:40.755 [2024-10-01 08:30:32.348041] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20578: invalid serial number '+8%Zho\[XzvsMiy.04fh' 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:40.755 { 00:15:40.755 "nqn": "nqn.2016-06.io.spdk:cnode20578", 00:15:40.755 "serial_number": "+8%\u007fZho\\[XzvsMiy.04fh", 00:15:40.755 "method": "nvmf_create_subsystem", 00:15:40.755 "req_id": 1 00:15:40.755 } 00:15:40.755 Got JSON-RPC error response 00:15:40.755 response: 00:15:40.755 { 00:15:40.755 "code": -32602, 00:15:40.755 "message": "Invalid SN +8%\u007fZho\\[XzvsMiy.04fh" 00:15:40.755 }' 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 
request: 00:15:40.755 { 00:15:40.755 "nqn": "nqn.2016-06.io.spdk:cnode20578", 00:15:40.755 "serial_number": "+8%\u007fZho\\[XzvsMiy.04fh", 00:15:40.755 "method": "nvmf_create_subsystem", 00:15:40.755 "req_id": 1 00:15:40.755 } 00:15:40.755 Got JSON-RPC error response 00:15:40.755 response: 00:15:40.755 { 00:15:40.755 "code": -32602, 00:15:40.755 "message": "Invalid SN +8%\u007fZho\\[XzvsMiy.04fh" 00:15:40.755 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=s 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.755 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x58' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
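A side note on the serial-number failures reported above: the \u001f and \u007f sequences in the error JSON are not literal text in the test, they are the raw control bytes the test appended, re-escaped by the JSON encoder on the way back out. Plain bash reproduces the same byte, independent of the harness:

```bash
# The serial number from the earlier check, with the trailing 0x1f byte.
s=$'SPDKISFASTANDAWESOME\x1f'
echo "${#s}"                    # 21 chars: 20 printable plus one 0x1f byte
printf '%s' "$s" | od -An -tx1  # hex dump; the final byte prints as 1f
```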
00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 
00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.756 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.757 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:40.757 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:40.757 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:40.757 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:40.757 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:40.757 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:40.757 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 
00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
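The 41-character string being assembled here is completed just below and handed to nvmf_create_subsystem as a model number, after which the test substring-matches the JSON-RPC error; the heavily escaped *\I\n\v\a\l\i\d\ \M\N* forms in this trace are only xtrace re-quoting that glob pattern character by character. The check pattern, reduced to a standalone sketch (cnode number arbitrary; the 40-byte limit is the NVMe-spec MN field size the validation appears to enforce):

```bash
# Sketch of the invalid-input checks used throughout this test: call the RPC
# with a bad value, capture the JSON-RPC error, and substring-match it.
rpc=./scripts/rpc.py                      # path relative to an SPDK checkout
bad_mn=$(printf 'A%.0s' {1..41})          # 41 bytes, one past the 40-byte MN field
out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -d "$bad_mn" 2>&1) || true
[[ $out == *"Invalid MN"* ]] || { echo "expected Invalid MN, got: $out" >&2; exit 1; }

# The cntlid-range checks further down follow the same shape:
out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 0 2>&1) || true
[[ $out == *"Invalid cntlid range [0-65519]"* ]] || exit 1
```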
00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '3HPsq|OrFaX>d4ao.mYcw1IUd,J1ZXlrl+}cGD4N5' 00:15:41.018 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '3HPsq|OrFaX>d4ao.mYcw1IUd,J1ZXlrl+}cGD4N5' nqn.2016-06.io.spdk:cnode30495 00:15:41.018 [2024-10-01 08:30:32.837614] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30495: invalid model number '3HPsq|OrFaX>d4ao.mYcw1IUd,J1ZXlrl+}cGD4N5' 00:15:41.279 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:41.279 { 00:15:41.279 "nqn": "nqn.2016-06.io.spdk:cnode30495", 00:15:41.279 "model_number": "3HPsq|OrFaX>d4ao.mYcw1IUd,J1ZXlrl+}cGD4N5", 00:15:41.279 "method": "nvmf_create_subsystem", 00:15:41.279 "req_id": 1 00:15:41.279 } 00:15:41.279 Got JSON-RPC error response 00:15:41.279 response: 00:15:41.279 { 00:15:41.279 "code": -32602, 00:15:41.279 "message": "Invalid MN 3HPsq|OrFaX>d4ao.mYcw1IUd,J1ZXlrl+}cGD4N5" 00:15:41.279 }' 00:15:41.279 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:41.279 { 00:15:41.279 "nqn": "nqn.2016-06.io.spdk:cnode30495", 00:15:41.279 "model_number": 
"3HPsq|OrFaX>d4ao.mYcw1IUd,J1ZXlrl+}cGD4N5", 00:15:41.279 "method": "nvmf_create_subsystem", 00:15:41.279 "req_id": 1 00:15:41.279 } 00:15:41.279 Got JSON-RPC error response 00:15:41.279 response: 00:15:41.279 { 00:15:41.279 "code": -32602, 00:15:41.279 "message": "Invalid MN 3HPsq|OrFaX>d4ao.mYcw1IUd,J1ZXlrl+}cGD4N5" 00:15:41.279 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:41.279 08:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:41.279 [2024-10-01 08:30:33.022324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.279 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:41.539 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:41.539 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:41.539 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:41.539 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:41.539 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:41.798 [2024-10-01 08:30:33.391463] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:41.799 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:41.799 { 00:15:41.799 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:41.799 "listen_address": { 00:15:41.799 "trtype": "tcp", 00:15:41.799 "traddr": "", 00:15:41.799 "trsvcid": "4421" 00:15:41.799 }, 00:15:41.799 "method": "nvmf_subsystem_remove_listener", 00:15:41.799 "req_id": 1 00:15:41.799 } 00:15:41.799 Got JSON-RPC error response 00:15:41.799 response: 00:15:41.799 { 00:15:41.799 "code": -32602, 00:15:41.799 "message": "Invalid parameters" 00:15:41.799 }' 00:15:41.799 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:41.799 { 00:15:41.799 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:41.799 "listen_address": { 00:15:41.799 "trtype": "tcp", 00:15:41.799 "traddr": "", 00:15:41.799 "trsvcid": "4421" 00:15:41.799 }, 00:15:41.799 "method": "nvmf_subsystem_remove_listener", 00:15:41.799 "req_id": 1 00:15:41.799 } 00:15:41.799 Got JSON-RPC error response 00:15:41.799 response: 00:15:41.799 { 00:15:41.799 "code": -32602, 00:15:41.799 "message": "Invalid parameters" 00:15:41.799 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:41.799 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20491 -i 0 00:15:41.799 [2024-10-01 08:30:33.576008] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20491: invalid cntlid range [0-65519] 00:15:41.799 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:41.799 { 00:15:41.799 "nqn": "nqn.2016-06.io.spdk:cnode20491", 00:15:41.799 "min_cntlid": 0, 00:15:41.799 "method": "nvmf_create_subsystem", 00:15:41.799 "req_id": 1 00:15:41.799 } 00:15:41.799 Got JSON-RPC error 
response 00:15:41.799 response: 00:15:41.799 { 00:15:41.799 "code": -32602, 00:15:41.799 "message": "Invalid cntlid range [0-65519]" 00:15:41.799 }' 00:15:41.799 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:41.799 { 00:15:41.799 "nqn": "nqn.2016-06.io.spdk:cnode20491", 00:15:41.799 "min_cntlid": 0, 00:15:41.799 "method": "nvmf_create_subsystem", 00:15:41.799 "req_id": 1 00:15:41.799 } 00:15:41.799 Got JSON-RPC error response 00:15:41.799 response: 00:15:41.799 { 00:15:41.799 "code": -32602, 00:15:41.799 "message": "Invalid cntlid range [0-65519]" 00:15:41.799 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:41.799 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32661 -i 65520 00:15:42.059 [2024-10-01 08:30:33.764618] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32661: invalid cntlid range [65520-65519] 00:15:42.059 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:42.059 { 00:15:42.059 "nqn": "nqn.2016-06.io.spdk:cnode32661", 00:15:42.059 "min_cntlid": 65520, 00:15:42.059 "method": "nvmf_create_subsystem", 00:15:42.059 "req_id": 1 00:15:42.059 } 00:15:42.059 Got JSON-RPC error response 00:15:42.059 response: 00:15:42.059 { 00:15:42.059 "code": -32602, 00:15:42.059 "message": "Invalid cntlid range [65520-65519]" 00:15:42.059 }' 00:15:42.059 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:42.059 { 00:15:42.059 "nqn": "nqn.2016-06.io.spdk:cnode32661", 00:15:42.059 "min_cntlid": 65520, 00:15:42.059 "method": "nvmf_create_subsystem", 00:15:42.059 "req_id": 1 00:15:42.059 } 00:15:42.059 Got JSON-RPC error response 00:15:42.059 response: 00:15:42.059 { 00:15:42.059 "code": -32602, 00:15:42.059 "message": "Invalid cntlid range [65520-65519]" 00:15:42.059 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:42.059 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23021 -I 0 00:15:42.319 [2024-10-01 08:30:33.945177] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23021: invalid cntlid range [1-0] 00:15:42.319 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:42.319 { 00:15:42.319 "nqn": "nqn.2016-06.io.spdk:cnode23021", 00:15:42.319 "max_cntlid": 0, 00:15:42.319 "method": "nvmf_create_subsystem", 00:15:42.319 "req_id": 1 00:15:42.319 } 00:15:42.319 Got JSON-RPC error response 00:15:42.319 response: 00:15:42.319 { 00:15:42.319 "code": -32602, 00:15:42.319 "message": "Invalid cntlid range [1-0]" 00:15:42.319 }' 00:15:42.319 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:42.319 { 00:15:42.319 "nqn": "nqn.2016-06.io.spdk:cnode23021", 00:15:42.319 "max_cntlid": 0, 00:15:42.319 "method": "nvmf_create_subsystem", 00:15:42.319 "req_id": 1 00:15:42.319 } 00:15:42.319 Got JSON-RPC error response 00:15:42.319 response: 00:15:42.319 { 00:15:42.319 "code": -32602, 00:15:42.319 "message": "Invalid cntlid range [1-0]" 00:15:42.319 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:42.319 08:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1110 -I 65520 00:15:42.319 [2024-10-01 08:30:34.133758] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1110: invalid cntlid range [1-65520] 00:15:42.579 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:42.579 { 00:15:42.579 "nqn": "nqn.2016-06.io.spdk:cnode1110", 00:15:42.579 "max_cntlid": 65520, 00:15:42.579 "method": "nvmf_create_subsystem", 00:15:42.579 "req_id": 1 00:15:42.579 } 00:15:42.579 Got JSON-RPC error response 00:15:42.579 response: 00:15:42.579 { 00:15:42.579 "code": -32602, 00:15:42.579 "message": "Invalid cntlid range [1-65520]" 00:15:42.579 }' 00:15:42.579 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:42.579 { 00:15:42.579 "nqn": "nqn.2016-06.io.spdk:cnode1110", 00:15:42.579 "max_cntlid": 65520, 00:15:42.579 "method": "nvmf_create_subsystem", 00:15:42.579 "req_id": 1 00:15:42.579 } 00:15:42.579 Got JSON-RPC error response 00:15:42.579 response: 00:15:42.579 { 00:15:42.579 "code": -32602, 00:15:42.579 "message": "Invalid cntlid range [1-65520]" 00:15:42.579 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:42.579 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8014 -i 6 -I 5 00:15:42.579 [2024-10-01 08:30:34.306310] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8014: invalid cntlid range [6-5] 00:15:42.579 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:42.579 { 00:15:42.579 "nqn": "nqn.2016-06.io.spdk:cnode8014", 00:15:42.579 "min_cntlid": 6, 00:15:42.579 "max_cntlid": 5, 00:15:42.579 "method": "nvmf_create_subsystem", 00:15:42.579 "req_id": 1 00:15:42.579 } 00:15:42.579 Got JSON-RPC error response 00:15:42.579 response: 00:15:42.579 { 00:15:42.579 "code": -32602, 00:15:42.579 "message": "Invalid cntlid range [6-5]" 00:15:42.579 }' 00:15:42.579 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:42.579 { 00:15:42.579 "nqn": "nqn.2016-06.io.spdk:cnode8014", 00:15:42.579 "min_cntlid": 6, 00:15:42.579 "max_cntlid": 5, 00:15:42.579 "method": "nvmf_create_subsystem", 00:15:42.579 "req_id": 1 00:15:42.579 } 00:15:42.579 Got JSON-RPC error response 00:15:42.579 response: 00:15:42.579 { 00:15:42.579 "code": -32602, 00:15:42.579 "message": "Invalid cntlid range [6-5]" 00:15:42.579 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:42.579 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:42.838 { 00:15:42.838 "name": "foobar", 00:15:42.838 "method": "nvmf_delete_target", 00:15:42.838 "req_id": 1 00:15:42.838 } 00:15:42.838 Got JSON-RPC error response 00:15:42.838 response: 00:15:42.838 { 00:15:42.838 "code": -32602, 00:15:42.838 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:15:42.838 }' 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:42.838 { 00:15:42.838 "name": "foobar", 00:15:42.838 "method": "nvmf_delete_target", 00:15:42.838 "req_id": 1 00:15:42.838 } 00:15:42.838 Got JSON-RPC error response 00:15:42.838 response: 00:15:42.838 { 00:15:42.838 "code": -32602, 00:15:42.838 "message": "The specified target doesn't exist, cannot delete it." 00:15:42.838 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:42.838 rmmod nvme_tcp 00:15:42.838 rmmod nvme_fabrics 00:15:42.838 rmmod nvme_keyring 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 3681869 ']' 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 3681869 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3681869 ']' 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3681869 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3681869 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3681869' 00:15:42.838 killing process with pid 3681869 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3681869 00:15:42.838 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3681869 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:43.098 08:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.098 08:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.011 08:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:45.012 00:15:45.012 real 0m13.669s 00:15:45.012 user 0m20.297s 00:15:45.012 sys 0m6.271s 00:15:45.012 08:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.012 08:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:45.012 ************************************ 00:15:45.012 END TEST nvmf_invalid 00:15:45.012 ************************************ 00:15:45.280 08:30:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:45.280 08:30:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:45.280 08:30:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.280 08:30:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.280 ************************************ 00:15:45.280 START TEST nvmf_connect_stress 00:15:45.280 ************************************ 00:15:45.280 08:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:45.280 * Looking for test storage... 
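Note: the nvmf_invalid teardown traced above (nvmftestfini) unwinds the fixture in a fixed order before the next test begins. A condensed sketch of the equivalent steps, assuming the nvmf/common.sh helpers behave as their names suggest; guards and retries omitted:

    sync                                      # flush before unloading drivers
    modprobe -v -r nvme-tcp                   # initiator-side kernel modules
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"        # stop nvmf_tgt (pid 3681869 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip SPDK-tagged rules
    ip netns del cvl_0_0_ns_spdk 2>/dev/null  # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1                  # clear the initiator-side address

The iptables round-trip works because every rule these tests insert carries an 'SPDK_NVMF:' comment, so filtering those lines out of iptables-save and restoring the rest removes exactly the test's rules and nothing else.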
00:15:45.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.280 08:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:45.280 08:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:15:45.280 08:30:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:45.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.280 --rc genhtml_branch_coverage=1 00:15:45.280 --rc genhtml_function_coverage=1 00:15:45.280 --rc genhtml_legend=1 00:15:45.280 --rc geninfo_all_blocks=1 00:15:45.280 --rc geninfo_unexecuted_blocks=1 00:15:45.280 00:15:45.280 ' 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:45.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.280 --rc genhtml_branch_coverage=1 00:15:45.280 --rc genhtml_function_coverage=1 00:15:45.280 --rc genhtml_legend=1 00:15:45.280 --rc geninfo_all_blocks=1 00:15:45.280 --rc geninfo_unexecuted_blocks=1 00:15:45.280 00:15:45.280 ' 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:45.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.280 --rc genhtml_branch_coverage=1 00:15:45.280 --rc genhtml_function_coverage=1 00:15:45.280 --rc genhtml_legend=1 00:15:45.280 --rc geninfo_all_blocks=1 00:15:45.280 --rc geninfo_unexecuted_blocks=1 00:15:45.280 00:15:45.280 ' 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:45.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.280 --rc genhtml_branch_coverage=1 00:15:45.280 --rc genhtml_function_coverage=1 00:15:45.280 --rc genhtml_legend=1 00:15:45.280 --rc geninfo_all_blocks=1 00:15:45.280 --rc geninfo_unexecuted_blocks=1 00:15:45.280 00:15:45.280 ' 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.280 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.281 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.281 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.281 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.281 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.281 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.281 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.281 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.281 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.541 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:45.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:45.542 08:30:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:53.683 08:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:53.683 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # 
echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:53.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:53.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:53.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
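Note: the probe above matches this rig's two Intel E810 functions (vendor 0x8086, device 0x159b) and resolves the kernel net device behind each one through sysfs. The script keeps its own PCI bus cache; a sketch of the same lookup using lspci instead — a swapped-in technique, not the script's actual implementation:

    # List E810 functions by vendor:device, then read their net device names.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

On this machine the two ports come back as cvl_0_0 and cvl_0_1, which the rest of the test uses as the target and initiator interfaces respectively.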
00:15:53.683 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:15:53.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:15:53.684 00:15:53.684 --- 10.0.0.2 ping statistics --- 00:15:53.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.684 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:53.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:15:53.684 00:15:53.684 --- 10.0.0.1 ping statistics --- 00:15:53.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.684 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=3687055 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 3687055 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3687055 ']' 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
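Note: condensing the nvmf_tcp_init trace above — the target port is moved into its own network namespace so the initiator and target stacks stay isolated on one host, and the link is verified with a ping in each direction before the app starts. These are the commands as they appear in the log (only the iptables comment is elided to '...'):

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'            # tagged for later teardown
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Because the target lives in cvl_0_0_ns_spdk, every nvmf_tgt invocation that follows is prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is what the NVMF_APP re-assignment at common.sh line 293 arranges.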
00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.684 08:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 [2024-10-01 08:30:44.588794] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:15:53.684 [2024-10-01 08:30:44.588860] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.684 [2024-10-01 08:30:44.677009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.684 [2024-10-01 08:30:44.768743] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.684 [2024-10-01 08:30:44.768801] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.684 [2024-10-01 08:30:44.768809] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.684 [2024-10-01 08:30:44.768817] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.684 [2024-10-01 08:30:44.768823] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.684 [2024-10-01 08:30:44.770371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.684 [2024-10-01 08:30:44.770538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.684 [2024-10-01 08:30:44.770539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 [2024-10-01 08:30:45.437284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 
08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 [2024-10-01 08:30:45.471512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.684 NULL1 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3687292 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.684 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:53.947 08:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.947 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.208 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:54.208 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.208 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.208 08:30:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.470 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.470 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:54.470 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.470 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.470 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.042 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.042 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:55.042 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.042 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.042 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.303 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.303 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:55.303 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.303 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.303 08:30:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.564 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.564 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:55.564 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.564 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.564 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.823 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.823 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:55.823 08:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.823 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.823 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.084 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.084 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:56.084 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.084 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.084 08:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.656 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.656 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:56.656 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.656 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.656 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.916 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.916 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:56.916 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.917 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.917 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.177 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.177 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:57.177 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.177 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.177 08:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.436 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.436 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:57.436 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.436 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.436 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.697 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.697 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:57.697 08:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.697 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.697 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.268 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.268 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:58.268 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.269 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.269 08:30:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.529 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.529 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:58.529 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.529 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.529 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.789 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.789 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:58.789 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.789 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.789 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.049 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.049 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:59.049 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.049 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.049 08:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.310 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.310 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:59.310 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.310 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.310 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.880 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.880 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:15:59.880 08:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.880 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.880 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.141 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.141 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:00.141 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.141 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.141 08:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.402 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.402 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:00.402 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.402 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.402 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.705 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.705 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:00.705 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.705 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.705 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.992 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.992 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:00.992 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.992 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.992 08:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.278 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.278 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:01.278 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.278 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.278 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.870 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.870 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:01.870 08:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.870 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.870 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.132 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.132 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:02.132 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.132 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.132 08:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.392 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.392 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:02.392 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.392 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.392 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.653 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.653 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:02.653 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.653 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.653 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.913 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.913 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:02.913 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.913 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.913 08:30:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.492 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.492 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:03.492 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.492 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.492 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.752 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.752 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:03.752 08:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.752 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.752 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.013 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3687292 00:16:04.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3687292) - No such process 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3687292 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:04.013 rmmod nvme_tcp 00:16:04.013 rmmod nvme_fabrics 00:16:04.013 rmmod nvme_keyring 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 3687055 ']' 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 3687055 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3687055 ']' 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3687055 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3687055 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3687055' 00:16:04.013 killing process with pid 3687055 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3687055 00:16:04.013 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3687055 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.275 08:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:06.823 00:16:06.823 real 0m21.145s 00:16:06.823 user 0m42.031s 00:16:06.823 sys 0m9.134s 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.823 ************************************ 00:16:06.823 END TEST nvmf_connect_stress 00:16:06.823 ************************************ 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.823 ************************************ 00:16:06.823 START TEST nvmf_fused_ordering 00:16:06.823 ************************************ 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:06.823 * Looking for test storage... 
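The run above is the connect_stress harness in action: it registers a TCP listener and a 1000 MiB null bdev over RPC, launches the stress client in the background, then repeatedly probes the client with kill -0 (which delivers no signal; it only tests that the PID is still alive) while pushing batches of RPCs at the target, and finally removes rpc.txt and tears the target down. A minimal sketch of that flow, assuming an SPDK checkout at $SPDK and scripts/rpc.py as the RPC client -- the harness's own rpc_cmd wrapper and the exact contents of its RPC batch are abbreviated in the trace, so the batch below is illustrative:

  SPDK=/path/to/spdk                      # assumption: local SPDK checkout
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512    # 1000 MiB bdev, 512 B blocks

  # Stress client in the background; -t 10 bounds the run to 10 seconds.
  "$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
      -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN" -t 10 &
  PERF_PID=$!

  rpcs=rpc.txt
  for i in $(seq 1 20); do
      echo "nvmf_get_subsystems" >> "$rpcs"   # illustrative; the real batch differs
  done

  # kill -0 succeeds while the PID exists, so this loop keeps the target
  # busy with RPC traffic for as long as the stress client runs.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      while read -r cmd; do $RPC $cmd; done < "$rpcs"
  done
  wait "$PERF_PID"
  rm -f "$rpcs"

The "No such process" message in the trace is the expected end state: the final kill -0 fires after the client has already exited, which is what breaks the loop.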
00:16:06.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:06.823 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:06.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.824 --rc genhtml_branch_coverage=1 00:16:06.824 --rc genhtml_function_coverage=1 00:16:06.824 --rc genhtml_legend=1 00:16:06.824 --rc geninfo_all_blocks=1 00:16:06.824 --rc geninfo_unexecuted_blocks=1 00:16:06.824 00:16:06.824 ' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:06.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.824 --rc genhtml_branch_coverage=1 00:16:06.824 --rc genhtml_function_coverage=1 00:16:06.824 --rc genhtml_legend=1 00:16:06.824 --rc geninfo_all_blocks=1 00:16:06.824 --rc geninfo_unexecuted_blocks=1 00:16:06.824 00:16:06.824 ' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:06.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.824 --rc genhtml_branch_coverage=1 00:16:06.824 --rc genhtml_function_coverage=1 00:16:06.824 --rc genhtml_legend=1 00:16:06.824 --rc geninfo_all_blocks=1 00:16:06.824 --rc geninfo_unexecuted_blocks=1 00:16:06.824 00:16:06.824 ' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:06.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.824 --rc genhtml_branch_coverage=1 00:16:06.824 --rc genhtml_function_coverage=1 00:16:06.824 --rc genhtml_legend=1 00:16:06.824 --rc geninfo_all_blocks=1 00:16:06.824 --rc geninfo_unexecuted_blocks=1 00:16:06.824 00:16:06.824 ' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:06.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:06.824 08:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.970 08:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:14.970 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:14.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # 
echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:14.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:14.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:14.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
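The "Found net devices under ..." entries directly above come from the PCI scan in nvmf/common.sh: a whitelist of Intel e810/x722 and Mellanox device IDs is assembled first, and for every whitelisted PCI function the kernel net devices published under sysfs are collected. A minimal sketch of the per-device step, using the two e810 addresses from this run (the real helper also copes with driverless or down devices):

  for pci in 0000:4b:00.0 0000:4b:00.1; do        # addresses from this run
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the device names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

Both ports resolve to cvl_0_0 and cvl_0_1, which is exactly what the TCP init step below goes on to wire up.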
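A few screens back, the test-storage probe gated its lcov options with lt 1.15 2, implemented by cmp_versions in scripts/common.sh: both version strings are split on the characters . - : and the components are compared pairwise as integers until one side wins. A minimal re-derivation of just the less-than case, assuming purely numeric components (the real helper additionally validates each component against a decimal regex):

  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower component wins
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                                              # equal is not less-than
  }

  lt 1.15 2 && echo "lcov is pre-2.0: use the old --rc option spelling"

In this run lt 1.15 2 succeeds, so the pre-2.0 spelling --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 is exported, matching the LCOV_OPTS lines in the trace.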
00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:16:14.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:16:14.971 00:16:14.971 --- 10.0.0.2 ping statistics --- 00:16:14.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.971 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:16:14.971 00:16:14.971 --- 10.0.0.1 ping statistics --- 00:16:14.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.971 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=3693443 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 3693443 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3693443 ']' 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
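The nvmf_tcp_init block above is what gives the test its two endpoints: the first port is moved into a private network namespace and becomes the target at 10.0.0.2, the second stays in the root namespace as the initiator at 10.0.0.1, a firewall rule opens TCP/4420 toward the target, and reachability is ping-verified in both directions. Condensed from the exact commands in the trace:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                       # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator

With the topology up, nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." entry just above.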
00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.971 08:31:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.971 [2024-10-01 08:31:05.806554] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:16:14.971 [2024-10-01 08:31:05.806621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.971 [2024-10-01 08:31:05.895848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.971 [2024-10-01 08:31:05.988826] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.972 [2024-10-01 08:31:05.988883] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.972 [2024-10-01 08:31:05.988891] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.972 [2024-10-01 08:31:05.988899] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.972 [2024-10-01 08:31:05.988905] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.972 [2024-10-01 08:31:05.989682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.972 [2024-10-01 08:31:06.656173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.972 [2024-10-01 08:31:06.680436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.972 NULL1 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.972 08:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:14.972 [2024-10-01 08:31:06.751174] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:16:14.972 [2024-10-01 08:31:06.751216] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3693790 ] 00:16:15.546 Attached to nqn.2016-06.io.spdk:cnode1 00:16:15.546 Namespace ID: 1 size: 1GB 00:16:15.546 fused_ordering(0) 00:16:15.546 fused_ordering(1) 00:16:15.546 fused_ordering(2) 00:16:15.546 fused_ordering(3) 00:16:15.546 fused_ordering(4) 00:16:15.546 fused_ordering(5) 00:16:15.546 fused_ordering(6) 00:16:15.546 fused_ordering(7) 00:16:15.546 fused_ordering(8) 00:16:15.546 fused_ordering(9) 00:16:15.546 fused_ordering(10) 00:16:15.546 fused_ordering(11) 00:16:15.546 fused_ordering(12) 00:16:15.546 fused_ordering(13) 00:16:15.546 fused_ordering(14) 00:16:15.546 fused_ordering(15) 00:16:15.546 fused_ordering(16) 00:16:15.546 fused_ordering(17) 00:16:15.546 fused_ordering(18) 00:16:15.546 fused_ordering(19) 00:16:15.546 fused_ordering(20) 00:16:15.546 fused_ordering(21) 00:16:15.546 fused_ordering(22) 00:16:15.546 fused_ordering(23) 00:16:15.546 fused_ordering(24) 00:16:15.546 fused_ordering(25) 00:16:15.546 fused_ordering(26) 00:16:15.546 fused_ordering(27) 00:16:15.546 fused_ordering(28) 00:16:15.546 fused_ordering(29) 00:16:15.546 fused_ordering(30) 00:16:15.546 fused_ordering(31) 00:16:15.546 fused_ordering(32) 00:16:15.546 fused_ordering(33) 00:16:15.546 fused_ordering(34) 00:16:15.546 fused_ordering(35) 00:16:15.546 fused_ordering(36) 00:16:15.546 fused_ordering(37) 00:16:15.546 fused_ordering(38) 00:16:15.546 fused_ordering(39) 00:16:15.546 fused_ordering(40) 00:16:15.546 fused_ordering(41) 00:16:15.546 fused_ordering(42) 00:16:15.546 fused_ordering(43) 00:16:15.546 fused_ordering(44) 00:16:15.546 fused_ordering(45) 00:16:15.546 fused_ordering(46) 00:16:15.546 fused_ordering(47) 00:16:15.546 fused_ordering(48) 00:16:15.546 fused_ordering(49) 00:16:15.546 fused_ordering(50) 00:16:15.546 fused_ordering(51) 00:16:15.546 fused_ordering(52) 00:16:15.546 fused_ordering(53) 00:16:15.546 fused_ordering(54) 00:16:15.546 fused_ordering(55) 00:16:15.546 fused_ordering(56) 00:16:15.546 fused_ordering(57) 00:16:15.546 fused_ordering(58) 00:16:15.546 fused_ordering(59) 00:16:15.546 fused_ordering(60) 00:16:15.546 fused_ordering(61) 00:16:15.546 fused_ordering(62) 00:16:15.546 fused_ordering(63) 00:16:15.546 fused_ordering(64) 00:16:15.546 fused_ordering(65) 00:16:15.546 fused_ordering(66) 00:16:15.546 fused_ordering(67) 00:16:15.546 fused_ordering(68) 00:16:15.546 fused_ordering(69) 00:16:15.546 fused_ordering(70) 00:16:15.546 fused_ordering(71) 00:16:15.546 fused_ordering(72) 00:16:15.546 fused_ordering(73) 00:16:15.546 fused_ordering(74) 00:16:15.546 fused_ordering(75) 00:16:15.546 fused_ordering(76) 00:16:15.546 fused_ordering(77) 00:16:15.546 fused_ordering(78) 00:16:15.546 fused_ordering(79) 00:16:15.546 fused_ordering(80) 00:16:15.546 fused_ordering(81) 00:16:15.546 fused_ordering(82) 00:16:15.546 fused_ordering(83) 00:16:15.546 fused_ordering(84) 00:16:15.546 fused_ordering(85) 00:16:15.546 fused_ordering(86) 00:16:15.546 fused_ordering(87) 00:16:15.546 fused_ordering(88) 00:16:15.546 fused_ordering(89) 00:16:15.546 fused_ordering(90) 00:16:15.546 fused_ordering(91) 00:16:15.546 fused_ordering(92) 00:16:15.546 fused_ordering(93) 00:16:15.546 fused_ordering(94) 00:16:15.546 fused_ordering(95) 00:16:15.546 fused_ordering(96) 00:16:15.546 fused_ordering(97) 00:16:15.546 fused_ordering(98) 
00:16:15.546 fused_ordering(99) 00:16:15.546 fused_ordering(100) 00:16:15.546 fused_ordering(101) 00:16:15.546 fused_ordering(102) 00:16:15.546 fused_ordering(103) 00:16:15.546 fused_ordering(104) 00:16:15.546 fused_ordering(105) 00:16:15.546 fused_ordering(106) 00:16:15.546 fused_ordering(107) 00:16:15.546 fused_ordering(108) 00:16:15.546 fused_ordering(109) 00:16:15.546 fused_ordering(110) 00:16:15.546 fused_ordering(111) 00:16:15.546 fused_ordering(112) 00:16:15.546 fused_ordering(113) 00:16:15.546 fused_ordering(114) 00:16:15.546 fused_ordering(115) 00:16:15.546 fused_ordering(116) 00:16:15.546 fused_ordering(117) 00:16:15.546 fused_ordering(118) 00:16:15.546 fused_ordering(119) 00:16:15.546 fused_ordering(120) 00:16:15.546 fused_ordering(121) 00:16:15.546 fused_ordering(122) 00:16:15.546 fused_ordering(123) 00:16:15.546 fused_ordering(124) 00:16:15.546 fused_ordering(125) 00:16:15.546 fused_ordering(126) 00:16:15.546 fused_ordering(127) 00:16:15.546 fused_ordering(128) 00:16:15.546 fused_ordering(129) 00:16:15.546 fused_ordering(130) 00:16:15.546 fused_ordering(131) 00:16:15.546 fused_ordering(132) 00:16:15.546 fused_ordering(133) 00:16:15.546 fused_ordering(134) 00:16:15.546 fused_ordering(135) 00:16:15.547 fused_ordering(136) 00:16:15.547 fused_ordering(137) 00:16:15.547 fused_ordering(138) 00:16:15.547 fused_ordering(139) 00:16:15.547 fused_ordering(140) 00:16:15.547 fused_ordering(141) 00:16:15.547 fused_ordering(142) 00:16:15.547 fused_ordering(143) 00:16:15.547 fused_ordering(144) 00:16:15.547 fused_ordering(145) 00:16:15.547 fused_ordering(146) 00:16:15.547 fused_ordering(147) 00:16:15.547 fused_ordering(148) 00:16:15.547 fused_ordering(149) 00:16:15.547 fused_ordering(150) 00:16:15.547 fused_ordering(151) 00:16:15.547 fused_ordering(152) 00:16:15.547 fused_ordering(153) 00:16:15.547 fused_ordering(154) 00:16:15.547 fused_ordering(155) 00:16:15.547 fused_ordering(156) 00:16:15.547 fused_ordering(157) 00:16:15.547 fused_ordering(158) 00:16:15.547 fused_ordering(159) 00:16:15.547 fused_ordering(160) 00:16:15.547 fused_ordering(161) 00:16:15.547 fused_ordering(162) 00:16:15.547 fused_ordering(163) 00:16:15.547 fused_ordering(164) 00:16:15.547 fused_ordering(165) 00:16:15.547 fused_ordering(166) 00:16:15.547 fused_ordering(167) 00:16:15.547 fused_ordering(168) 00:16:15.547 fused_ordering(169) 00:16:15.547 fused_ordering(170) 00:16:15.547 fused_ordering(171) 00:16:15.547 fused_ordering(172) 00:16:15.547 fused_ordering(173) 00:16:15.547 fused_ordering(174) 00:16:15.547 fused_ordering(175) 00:16:15.547 fused_ordering(176) 00:16:15.547 fused_ordering(177) 00:16:15.547 fused_ordering(178) 00:16:15.547 fused_ordering(179) 00:16:15.547 fused_ordering(180) 00:16:15.547 fused_ordering(181) 00:16:15.547 fused_ordering(182) 00:16:15.547 fused_ordering(183) 00:16:15.547 fused_ordering(184) 00:16:15.547 fused_ordering(185) 00:16:15.547 fused_ordering(186) 00:16:15.547 fused_ordering(187) 00:16:15.547 fused_ordering(188) 00:16:15.547 fused_ordering(189) 00:16:15.547 fused_ordering(190) 00:16:15.547 fused_ordering(191) 00:16:15.547 fused_ordering(192) 00:16:15.547 fused_ordering(193) 00:16:15.547 fused_ordering(194) 00:16:15.547 fused_ordering(195) 00:16:15.547 fused_ordering(196) 00:16:15.547 fused_ordering(197) 00:16:15.547 fused_ordering(198) 00:16:15.547 fused_ordering(199) 00:16:15.547 fused_ordering(200) 00:16:15.547 fused_ordering(201) 00:16:15.547 fused_ordering(202) 00:16:15.547 fused_ordering(203) 00:16:15.547 fused_ordering(204) 00:16:15.547 fused_ordering(205) 00:16:15.808 
fused_ordering(206) 00:16:15.808 fused_ordering(207) 00:16:15.808 fused_ordering(208) 00:16:15.808 fused_ordering(209) 00:16:15.808 fused_ordering(210) 00:16:15.808 fused_ordering(211) 00:16:15.808 fused_ordering(212) 00:16:15.808 fused_ordering(213) 00:16:15.808 fused_ordering(214) 00:16:15.808 fused_ordering(215) 00:16:15.808 fused_ordering(216) 00:16:15.808 fused_ordering(217) 00:16:15.808 fused_ordering(218) 00:16:15.808 fused_ordering(219) 00:16:15.808 fused_ordering(220) 00:16:15.808 fused_ordering(221) 00:16:15.808 fused_ordering(222) 00:16:15.808 fused_ordering(223) 00:16:15.808 fused_ordering(224) 00:16:15.808 fused_ordering(225) 00:16:15.808 fused_ordering(226) 00:16:15.808 fused_ordering(227) 00:16:15.808 fused_ordering(228) 00:16:15.808 fused_ordering(229) 00:16:15.808 fused_ordering(230) 00:16:15.808 fused_ordering(231) 00:16:15.808 fused_ordering(232) 00:16:15.808 fused_ordering(233) 00:16:15.808 fused_ordering(234) 00:16:15.808 fused_ordering(235) 00:16:15.808 fused_ordering(236) 00:16:15.808 fused_ordering(237) 00:16:15.808 fused_ordering(238) 00:16:15.808 fused_ordering(239) 00:16:15.808 fused_ordering(240) 00:16:15.808 fused_ordering(241) 00:16:15.808 fused_ordering(242) 00:16:15.808 fused_ordering(243) 00:16:15.808 fused_ordering(244) 00:16:15.808 fused_ordering(245) 00:16:15.808 fused_ordering(246) 00:16:15.808 fused_ordering(247) 00:16:15.808 fused_ordering(248) 00:16:15.808 fused_ordering(249) 00:16:15.808 fused_ordering(250) 00:16:15.808 fused_ordering(251) 00:16:15.808 fused_ordering(252) 00:16:15.808 fused_ordering(253) 00:16:15.808 fused_ordering(254) 00:16:15.808 fused_ordering(255) 00:16:15.808 fused_ordering(256) 00:16:15.808 fused_ordering(257) 00:16:15.808 fused_ordering(258) 00:16:15.808 fused_ordering(259) 00:16:15.808 fused_ordering(260) 00:16:15.808 fused_ordering(261) 00:16:15.808 fused_ordering(262) 00:16:15.808 fused_ordering(263) 00:16:15.808 fused_ordering(264) 00:16:15.808 fused_ordering(265) 00:16:15.808 fused_ordering(266) 00:16:15.808 fused_ordering(267) 00:16:15.808 fused_ordering(268) 00:16:15.808 fused_ordering(269) 00:16:15.808 fused_ordering(270) 00:16:15.808 fused_ordering(271) 00:16:15.808 fused_ordering(272) 00:16:15.808 fused_ordering(273) 00:16:15.808 fused_ordering(274) 00:16:15.808 fused_ordering(275) 00:16:15.808 fused_ordering(276) 00:16:15.808 fused_ordering(277) 00:16:15.808 fused_ordering(278) 00:16:15.808 fused_ordering(279) 00:16:15.808 fused_ordering(280) 00:16:15.808 fused_ordering(281) 00:16:15.808 fused_ordering(282) 00:16:15.808 fused_ordering(283) 00:16:15.808 fused_ordering(284) 00:16:15.808 fused_ordering(285) 00:16:15.808 fused_ordering(286) 00:16:15.809 fused_ordering(287) 00:16:15.809 fused_ordering(288) 00:16:15.809 fused_ordering(289) 00:16:15.809 fused_ordering(290) 00:16:15.809 fused_ordering(291) 00:16:15.809 fused_ordering(292) 00:16:15.809 fused_ordering(293) 00:16:15.809 fused_ordering(294) 00:16:15.809 fused_ordering(295) 00:16:15.809 fused_ordering(296) 00:16:15.809 fused_ordering(297) 00:16:15.809 fused_ordering(298) 00:16:15.809 fused_ordering(299) 00:16:15.809 fused_ordering(300) 00:16:15.809 fused_ordering(301) 00:16:15.809 fused_ordering(302) 00:16:15.809 fused_ordering(303) 00:16:15.809 fused_ordering(304) 00:16:15.809 fused_ordering(305) 00:16:15.809 fused_ordering(306) 00:16:15.809 fused_ordering(307) 00:16:15.809 fused_ordering(308) 00:16:15.809 fused_ordering(309) 00:16:15.809 fused_ordering(310) 00:16:15.809 fused_ordering(311) 00:16:15.809 fused_ordering(312) 00:16:15.809 fused_ordering(313) 
00:16:15.809 fused_ordering(314) 00:16:15.809 fused_ordering(315) 00:16:15.809 fused_ordering(316) 00:16:15.809 fused_ordering(317) 00:16:15.809 fused_ordering(318) 00:16:15.809 fused_ordering(319) 00:16:15.809 fused_ordering(320) 00:16:15.809 fused_ordering(321) 00:16:15.809 fused_ordering(322) 00:16:15.809 fused_ordering(323) 00:16:15.809 fused_ordering(324) 00:16:15.809 fused_ordering(325) 00:16:15.809 fused_ordering(326) 00:16:15.809 fused_ordering(327) 00:16:15.809 fused_ordering(328) 00:16:15.809 fused_ordering(329) 00:16:15.809 fused_ordering(330) 00:16:15.809 fused_ordering(331) 00:16:15.809 fused_ordering(332) 00:16:15.809 fused_ordering(333) 00:16:15.809 fused_ordering(334) 00:16:15.809 fused_ordering(335) 00:16:15.809 fused_ordering(336) 00:16:15.809 fused_ordering(337) 00:16:15.809 fused_ordering(338) 00:16:15.809 fused_ordering(339) 00:16:15.809 fused_ordering(340) 00:16:15.809 fused_ordering(341) 00:16:15.809 fused_ordering(342) 00:16:15.809 fused_ordering(343) 00:16:15.809 fused_ordering(344) 00:16:15.809 fused_ordering(345) 00:16:15.809 fused_ordering(346) 00:16:15.809 fused_ordering(347) 00:16:15.809 fused_ordering(348) 00:16:15.809 fused_ordering(349) 00:16:15.809 fused_ordering(350) 00:16:15.809 fused_ordering(351) 00:16:15.809 fused_ordering(352) 00:16:15.809 fused_ordering(353) 00:16:15.809 fused_ordering(354) 00:16:15.809 fused_ordering(355) 00:16:15.809 fused_ordering(356) 00:16:15.809 fused_ordering(357) 00:16:15.809 fused_ordering(358) 00:16:15.809 fused_ordering(359) 00:16:15.809 fused_ordering(360) 00:16:15.809 fused_ordering(361) 00:16:15.809 fused_ordering(362) 00:16:15.809 fused_ordering(363) 00:16:15.809 fused_ordering(364) 00:16:15.809 fused_ordering(365) 00:16:15.809 fused_ordering(366) 00:16:15.809 fused_ordering(367) 00:16:15.809 fused_ordering(368) 00:16:15.809 fused_ordering(369) 00:16:15.809 fused_ordering(370) 00:16:15.809 fused_ordering(371) 00:16:15.809 fused_ordering(372) 00:16:15.809 fused_ordering(373) 00:16:15.809 fused_ordering(374) 00:16:15.809 fused_ordering(375) 00:16:15.809 fused_ordering(376) 00:16:15.809 fused_ordering(377) 00:16:15.809 fused_ordering(378) 00:16:15.809 fused_ordering(379) 00:16:15.809 fused_ordering(380) 00:16:15.809 fused_ordering(381) 00:16:15.809 fused_ordering(382) 00:16:15.809 fused_ordering(383) 00:16:15.809 fused_ordering(384) 00:16:15.809 fused_ordering(385) 00:16:15.809 fused_ordering(386) 00:16:15.809 fused_ordering(387) 00:16:15.809 fused_ordering(388) 00:16:15.809 fused_ordering(389) 00:16:15.809 fused_ordering(390) 00:16:15.809 fused_ordering(391) 00:16:15.809 fused_ordering(392) 00:16:15.809 fused_ordering(393) 00:16:15.809 fused_ordering(394) 00:16:15.809 fused_ordering(395) 00:16:15.809 fused_ordering(396) 00:16:15.809 fused_ordering(397) 00:16:15.809 fused_ordering(398) 00:16:15.809 fused_ordering(399) 00:16:15.809 fused_ordering(400) 00:16:15.809 fused_ordering(401) 00:16:15.809 fused_ordering(402) 00:16:15.809 fused_ordering(403) 00:16:15.809 fused_ordering(404) 00:16:15.809 fused_ordering(405) 00:16:15.809 fused_ordering(406) 00:16:15.809 fused_ordering(407) 00:16:15.809 fused_ordering(408) 00:16:15.809 fused_ordering(409) 00:16:15.809 fused_ordering(410) 00:16:16.070 fused_ordering(411) 00:16:16.070 fused_ordering(412) 00:16:16.070 fused_ordering(413) 00:16:16.070 fused_ordering(414) 00:16:16.070 fused_ordering(415) 00:16:16.070 fused_ordering(416) 00:16:16.070 fused_ordering(417) 00:16:16.070 fused_ordering(418) 00:16:16.070 fused_ordering(419) 00:16:16.070 fused_ordering(420) 00:16:16.070 
fused_ordering(421) 00:16:16.070 fused_ordering(422) 00:16:16.070 fused_ordering(423) 00:16:16.070 fused_ordering(424) 00:16:16.070 fused_ordering(425) 00:16:16.070 fused_ordering(426) 00:16:16.070 fused_ordering(427) 00:16:16.070 fused_ordering(428) 00:16:16.070 fused_ordering(429) 00:16:16.070 fused_ordering(430) 00:16:16.070 fused_ordering(431) 00:16:16.070 fused_ordering(432) 00:16:16.070 fused_ordering(433) 00:16:16.070 fused_ordering(434) 00:16:16.070 fused_ordering(435) 00:16:16.070 fused_ordering(436) 00:16:16.070 fused_ordering(437) 00:16:16.071 fused_ordering(438) 00:16:16.071 fused_ordering(439) 00:16:16.071 fused_ordering(440) 00:16:16.071 fused_ordering(441) 00:16:16.071 fused_ordering(442) 00:16:16.071 fused_ordering(443) 00:16:16.071 fused_ordering(444) 00:16:16.071 fused_ordering(445) 00:16:16.071 fused_ordering(446) 00:16:16.071 fused_ordering(447) 00:16:16.071 fused_ordering(448) 00:16:16.071 fused_ordering(449) 00:16:16.071 fused_ordering(450) 00:16:16.071 fused_ordering(451) 00:16:16.071 fused_ordering(452) 00:16:16.071 fused_ordering(453) 00:16:16.071 fused_ordering(454) 00:16:16.071 fused_ordering(455) 00:16:16.071 fused_ordering(456) 00:16:16.071 fused_ordering(457) 00:16:16.071 fused_ordering(458) 00:16:16.071 fused_ordering(459) 00:16:16.071 fused_ordering(460) 00:16:16.071 fused_ordering(461) 00:16:16.071 fused_ordering(462) 00:16:16.071 fused_ordering(463) 00:16:16.071 fused_ordering(464) 00:16:16.071 fused_ordering(465) 00:16:16.071 fused_ordering(466) 00:16:16.071 fused_ordering(467) 00:16:16.071 fused_ordering(468) 00:16:16.071 fused_ordering(469) 00:16:16.071 fused_ordering(470) 00:16:16.071 fused_ordering(471) 00:16:16.071 fused_ordering(472) 00:16:16.071 fused_ordering(473) 00:16:16.071 fused_ordering(474) 00:16:16.071 fused_ordering(475) 00:16:16.071 fused_ordering(476) 00:16:16.071 fused_ordering(477) 00:16:16.071 fused_ordering(478) 00:16:16.071 fused_ordering(479) 00:16:16.071 fused_ordering(480) 00:16:16.071 fused_ordering(481) 00:16:16.071 fused_ordering(482) 00:16:16.071 fused_ordering(483) 00:16:16.071 fused_ordering(484) 00:16:16.071 fused_ordering(485) 00:16:16.071 fused_ordering(486) 00:16:16.071 fused_ordering(487) 00:16:16.071 fused_ordering(488) 00:16:16.071 fused_ordering(489) 00:16:16.071 fused_ordering(490) 00:16:16.071 fused_ordering(491) 00:16:16.071 fused_ordering(492) 00:16:16.071 fused_ordering(493) 00:16:16.071 fused_ordering(494) 00:16:16.071 fused_ordering(495) 00:16:16.071 fused_ordering(496) 00:16:16.071 fused_ordering(497) 00:16:16.071 fused_ordering(498) 00:16:16.071 fused_ordering(499) 00:16:16.071 fused_ordering(500) 00:16:16.071 fused_ordering(501) 00:16:16.071 fused_ordering(502) 00:16:16.071 fused_ordering(503) 00:16:16.071 fused_ordering(504) 00:16:16.071 fused_ordering(505) 00:16:16.071 fused_ordering(506) 00:16:16.071 fused_ordering(507) 00:16:16.071 fused_ordering(508) 00:16:16.071 fused_ordering(509) 00:16:16.071 fused_ordering(510) 00:16:16.071 fused_ordering(511) 00:16:16.071 fused_ordering(512) 00:16:16.071 fused_ordering(513) 00:16:16.071 fused_ordering(514) 00:16:16.071 fused_ordering(515) 00:16:16.071 fused_ordering(516) 00:16:16.071 fused_ordering(517) 00:16:16.071 fused_ordering(518) 00:16:16.071 fused_ordering(519) 00:16:16.071 fused_ordering(520) 00:16:16.071 fused_ordering(521) 00:16:16.071 fused_ordering(522) 00:16:16.071 fused_ordering(523) 00:16:16.071 fused_ordering(524) 00:16:16.071 fused_ordering(525) 00:16:16.071 fused_ordering(526) 00:16:16.071 fused_ordering(527) 00:16:16.071 fused_ordering(528) 
00:16:16.071 fused_ordering(529) 00:16:16.071 fused_ordering(530) 00:16:16.071 fused_ordering(531) 00:16:16.071 fused_ordering(532) 00:16:16.071 fused_ordering(533) 00:16:16.071 fused_ordering(534) 00:16:16.071 fused_ordering(535) 00:16:16.071 fused_ordering(536) 00:16:16.071 fused_ordering(537) 00:16:16.071 fused_ordering(538) 00:16:16.071 fused_ordering(539) 00:16:16.071 fused_ordering(540) 00:16:16.071 fused_ordering(541) 00:16:16.071 fused_ordering(542) 00:16:16.071 fused_ordering(543) 00:16:16.071 fused_ordering(544) 00:16:16.071 fused_ordering(545) 00:16:16.071 fused_ordering(546) 00:16:16.071 fused_ordering(547) 00:16:16.071 fused_ordering(548) 00:16:16.071 fused_ordering(549) 00:16:16.071 fused_ordering(550) 00:16:16.071 fused_ordering(551) 00:16:16.071 fused_ordering(552) 00:16:16.071 fused_ordering(553) 00:16:16.071 fused_ordering(554) 00:16:16.071 fused_ordering(555) 00:16:16.071 fused_ordering(556) 00:16:16.071 fused_ordering(557) 00:16:16.071 fused_ordering(558) 00:16:16.071 fused_ordering(559) 00:16:16.071 fused_ordering(560) 00:16:16.071 fused_ordering(561) 00:16:16.071 fused_ordering(562) 00:16:16.071 fused_ordering(563) 00:16:16.071 fused_ordering(564) 00:16:16.071 fused_ordering(565) 00:16:16.071 fused_ordering(566) 00:16:16.071 fused_ordering(567) 00:16:16.071 fused_ordering(568) 00:16:16.071 fused_ordering(569) 00:16:16.071 fused_ordering(570) 00:16:16.071 fused_ordering(571) 00:16:16.071 fused_ordering(572) 00:16:16.071 fused_ordering(573) 00:16:16.071 fused_ordering(574) 00:16:16.071 fused_ordering(575) 00:16:16.071 fused_ordering(576) 00:16:16.071 fused_ordering(577) 00:16:16.071 fused_ordering(578) 00:16:16.071 fused_ordering(579) 00:16:16.071 fused_ordering(580) 00:16:16.071 fused_ordering(581) 00:16:16.071 fused_ordering(582) 00:16:16.071 fused_ordering(583) 00:16:16.071 fused_ordering(584) 00:16:16.071 fused_ordering(585) 00:16:16.071 fused_ordering(586) 00:16:16.071 fused_ordering(587) 00:16:16.071 fused_ordering(588) 00:16:16.071 fused_ordering(589) 00:16:16.071 fused_ordering(590) 00:16:16.071 fused_ordering(591) 00:16:16.071 fused_ordering(592) 00:16:16.071 fused_ordering(593) 00:16:16.071 fused_ordering(594) 00:16:16.071 fused_ordering(595) 00:16:16.071 fused_ordering(596) 00:16:16.071 fused_ordering(597) 00:16:16.071 fused_ordering(598) 00:16:16.071 fused_ordering(599) 00:16:16.071 fused_ordering(600) 00:16:16.071 fused_ordering(601) 00:16:16.071 fused_ordering(602) 00:16:16.071 fused_ordering(603) 00:16:16.071 fused_ordering(604) 00:16:16.071 fused_ordering(605) 00:16:16.071 fused_ordering(606) 00:16:16.071 fused_ordering(607) 00:16:16.071 fused_ordering(608) 00:16:16.071 fused_ordering(609) 00:16:16.071 fused_ordering(610) 00:16:16.071 fused_ordering(611) 00:16:16.071 fused_ordering(612) 00:16:16.071 fused_ordering(613) 00:16:16.071 fused_ordering(614) 00:16:16.071 fused_ordering(615) 00:16:16.643 fused_ordering(616) 00:16:16.643 fused_ordering(617) 00:16:16.643 fused_ordering(618) 00:16:16.643 fused_ordering(619) 00:16:16.643 fused_ordering(620) 00:16:16.643 fused_ordering(621) 00:16:16.643 fused_ordering(622) 00:16:16.643 fused_ordering(623) 00:16:16.643 fused_ordering(624) 00:16:16.643 fused_ordering(625) 00:16:16.643 fused_ordering(626) 00:16:16.643 fused_ordering(627) 00:16:16.643 fused_ordering(628) 00:16:16.643 fused_ordering(629) 00:16:16.643 fused_ordering(630) 00:16:16.643 fused_ordering(631) 00:16:16.643 fused_ordering(632) 00:16:16.643 fused_ordering(633) 00:16:16.643 fused_ordering(634) 00:16:16.643 fused_ordering(635) 00:16:16.643 
fused_ordering(636) 00:16:16.643 fused_ordering(637) 00:16:16.643 fused_ordering(638) 00:16:16.643 fused_ordering(639) 00:16:16.643 fused_ordering(640) 00:16:16.643 fused_ordering(641) 00:16:16.643 fused_ordering(642) 00:16:16.643 fused_ordering(643) 00:16:16.643 fused_ordering(644) 00:16:16.643 fused_ordering(645) 00:16:16.643 fused_ordering(646) 00:16:16.643 fused_ordering(647) 00:16:16.643 fused_ordering(648) 00:16:16.643 fused_ordering(649) 00:16:16.643 fused_ordering(650) 00:16:16.643 fused_ordering(651) 00:16:16.643 fused_ordering(652) 00:16:16.643 fused_ordering(653) 00:16:16.643 fused_ordering(654) 00:16:16.643 fused_ordering(655) 00:16:16.643 fused_ordering(656) 00:16:16.643 fused_ordering(657) 00:16:16.643 fused_ordering(658) 00:16:16.643 fused_ordering(659) 00:16:16.643 fused_ordering(660) 00:16:16.643 fused_ordering(661) 00:16:16.643 fused_ordering(662) 00:16:16.643 fused_ordering(663) 00:16:16.643 fused_ordering(664) 00:16:16.643 fused_ordering(665) 00:16:16.643 fused_ordering(666) 00:16:16.643 fused_ordering(667) 00:16:16.643 fused_ordering(668) 00:16:16.643 fused_ordering(669) 00:16:16.643 fused_ordering(670) 00:16:16.643 fused_ordering(671) 00:16:16.643 fused_ordering(672) 00:16:16.643 fused_ordering(673) 00:16:16.643 fused_ordering(674) 00:16:16.643 fused_ordering(675) 00:16:16.643 fused_ordering(676) 00:16:16.643 fused_ordering(677) 00:16:16.643 fused_ordering(678) 00:16:16.643 fused_ordering(679) 00:16:16.643 fused_ordering(680) 00:16:16.643 fused_ordering(681) 00:16:16.643 fused_ordering(682) 00:16:16.643 fused_ordering(683) 00:16:16.643 fused_ordering(684) 00:16:16.643 fused_ordering(685) 00:16:16.643 fused_ordering(686) 00:16:16.643 fused_ordering(687) 00:16:16.643 fused_ordering(688) 00:16:16.643 fused_ordering(689) 00:16:16.643 fused_ordering(690) 00:16:16.643 fused_ordering(691) 00:16:16.643 fused_ordering(692) 00:16:16.643 fused_ordering(693) 00:16:16.643 fused_ordering(694) 00:16:16.643 fused_ordering(695) 00:16:16.643 fused_ordering(696) 00:16:16.643 fused_ordering(697) 00:16:16.643 fused_ordering(698) 00:16:16.643 fused_ordering(699) 00:16:16.643 fused_ordering(700) 00:16:16.643 fused_ordering(701) 00:16:16.643 fused_ordering(702) 00:16:16.643 fused_ordering(703) 00:16:16.643 fused_ordering(704) 00:16:16.643 fused_ordering(705) 00:16:16.643 fused_ordering(706) 00:16:16.643 fused_ordering(707) 00:16:16.643 fused_ordering(708) 00:16:16.643 fused_ordering(709) 00:16:16.643 fused_ordering(710) 00:16:16.643 fused_ordering(711) 00:16:16.643 fused_ordering(712) 00:16:16.643 fused_ordering(713) 00:16:16.643 fused_ordering(714) 00:16:16.643 fused_ordering(715) 00:16:16.643 fused_ordering(716) 00:16:16.643 fused_ordering(717) 00:16:16.643 fused_ordering(718) 00:16:16.643 fused_ordering(719) 00:16:16.643 fused_ordering(720) 00:16:16.643 fused_ordering(721) 00:16:16.643 fused_ordering(722) 00:16:16.643 fused_ordering(723) 00:16:16.643 fused_ordering(724) 00:16:16.643 fused_ordering(725) 00:16:16.643 fused_ordering(726) 00:16:16.643 fused_ordering(727) 00:16:16.643 fused_ordering(728) 00:16:16.643 fused_ordering(729) 00:16:16.644 fused_ordering(730) 00:16:16.644 fused_ordering(731) 00:16:16.644 fused_ordering(732) 00:16:16.644 fused_ordering(733) 00:16:16.644 fused_ordering(734) 00:16:16.644 fused_ordering(735) 00:16:16.644 fused_ordering(736) 00:16:16.644 fused_ordering(737) 00:16:16.644 fused_ordering(738) 00:16:16.644 fused_ordering(739) 00:16:16.644 fused_ordering(740) 00:16:16.644 fused_ordering(741) 00:16:16.644 fused_ordering(742) 00:16:16.644 fused_ordering(743) 
00:16:16.644 fused_ordering(744) 00:16:16.644 fused_ordering(745) 00:16:16.644 fused_ordering(746) 00:16:16.644 fused_ordering(747) 00:16:16.644 fused_ordering(748) 00:16:16.644 fused_ordering(749) 00:16:16.644 fused_ordering(750) 00:16:16.644 fused_ordering(751) 00:16:16.644 fused_ordering(752) 00:16:16.644 fused_ordering(753) 00:16:16.644 fused_ordering(754) 00:16:16.644 fused_ordering(755) 00:16:16.644 fused_ordering(756) 00:16:16.644 fused_ordering(757) 00:16:16.644 fused_ordering(758) 00:16:16.644 fused_ordering(759) 00:16:16.644 fused_ordering(760) 00:16:16.644 fused_ordering(761) 00:16:16.644 fused_ordering(762) 00:16:16.644 fused_ordering(763) 00:16:16.644 fused_ordering(764) 00:16:16.644 fused_ordering(765) 00:16:16.644 fused_ordering(766) 00:16:16.644 fused_ordering(767) 00:16:16.644 fused_ordering(768) 00:16:16.644 fused_ordering(769) 00:16:16.644 fused_ordering(770) 00:16:16.644 fused_ordering(771) 00:16:16.644 fused_ordering(772) 00:16:16.644 fused_ordering(773) 00:16:16.644 fused_ordering(774) 00:16:16.644 fused_ordering(775) 00:16:16.644 fused_ordering(776) 00:16:16.644 fused_ordering(777) 00:16:16.644 fused_ordering(778) 00:16:16.644 fused_ordering(779) 00:16:16.644 fused_ordering(780) 00:16:16.644 fused_ordering(781) 00:16:16.644 fused_ordering(782) 00:16:16.644 fused_ordering(783) 00:16:16.644 fused_ordering(784) 00:16:16.644 fused_ordering(785) 00:16:16.644 fused_ordering(786) 00:16:16.644 fused_ordering(787) 00:16:16.644 fused_ordering(788) 00:16:16.644 fused_ordering(789) 00:16:16.644 fused_ordering(790) 00:16:16.644 fused_ordering(791) 00:16:16.644 fused_ordering(792) 00:16:16.644 fused_ordering(793) 00:16:16.644 fused_ordering(794) 00:16:16.644 fused_ordering(795) 00:16:16.644 fused_ordering(796) 00:16:16.644 fused_ordering(797) 00:16:16.644 fused_ordering(798) 00:16:16.644 fused_ordering(799) 00:16:16.644 fused_ordering(800) 00:16:16.644 fused_ordering(801) 00:16:16.644 fused_ordering(802) 00:16:16.644 fused_ordering(803) 00:16:16.644 fused_ordering(804) 00:16:16.644 fused_ordering(805) 00:16:16.644 fused_ordering(806) 00:16:16.644 fused_ordering(807) 00:16:16.644 fused_ordering(808) 00:16:16.644 fused_ordering(809) 00:16:16.644 fused_ordering(810) 00:16:16.644 fused_ordering(811) 00:16:16.644 fused_ordering(812) 00:16:16.644 fused_ordering(813) 00:16:16.644 fused_ordering(814) 00:16:16.644 fused_ordering(815) 00:16:16.644 fused_ordering(816) 00:16:16.644 fused_ordering(817) 00:16:16.644 fused_ordering(818) 00:16:16.644 fused_ordering(819) 00:16:16.644 fused_ordering(820) 00:16:17.214 fused_ordering(821) 00:16:17.214 fused_ordering(822) 00:16:17.214 fused_ordering(823) 00:16:17.214 fused_ordering(824) 00:16:17.214 fused_ordering(825) 00:16:17.214 fused_ordering(826) 00:16:17.214 fused_ordering(827) 00:16:17.214 fused_ordering(828) 00:16:17.214 fused_ordering(829) 00:16:17.214 fused_ordering(830) 00:16:17.214 fused_ordering(831) 00:16:17.214 fused_ordering(832) 00:16:17.214 fused_ordering(833) 00:16:17.214 fused_ordering(834) 00:16:17.214 fused_ordering(835) 00:16:17.214 fused_ordering(836) 00:16:17.214 fused_ordering(837) 00:16:17.214 fused_ordering(838) 00:16:17.214 fused_ordering(839) 00:16:17.214 fused_ordering(840) 00:16:17.214 fused_ordering(841) 00:16:17.214 fused_ordering(842) 00:16:17.214 fused_ordering(843) 00:16:17.214 fused_ordering(844) 00:16:17.214 fused_ordering(845) 00:16:17.214 fused_ordering(846) 00:16:17.214 fused_ordering(847) 00:16:17.214 fused_ordering(848) 00:16:17.214 fused_ordering(849) 00:16:17.214 fused_ordering(850) 00:16:17.214 
fused_ordering(851) 00:16:17.214 fused_ordering(852) 00:16:17.214 fused_ordering(853) 00:16:17.214 fused_ordering(854) 00:16:17.214 fused_ordering(855) 00:16:17.214 fused_ordering(856) 00:16:17.214 fused_ordering(857) 00:16:17.214 fused_ordering(858) 00:16:17.214 fused_ordering(859) 00:16:17.214 fused_ordering(860) 00:16:17.214 fused_ordering(861) 00:16:17.214 fused_ordering(862) 00:16:17.214 fused_ordering(863) 00:16:17.214 fused_ordering(864) 00:16:17.214 fused_ordering(865) 00:16:17.214 fused_ordering(866) 00:16:17.214 fused_ordering(867) 00:16:17.214 fused_ordering(868) 00:16:17.214 fused_ordering(869) 00:16:17.214 fused_ordering(870) 00:16:17.214 fused_ordering(871) 00:16:17.214 fused_ordering(872) 00:16:17.214 fused_ordering(873) 00:16:17.214 fused_ordering(874) 00:16:17.214 fused_ordering(875) 00:16:17.214 fused_ordering(876) 00:16:17.214 fused_ordering(877) 00:16:17.214 fused_ordering(878) 00:16:17.214 fused_ordering(879) 00:16:17.214 fused_ordering(880) 00:16:17.214 fused_ordering(881) 00:16:17.214 fused_ordering(882) 00:16:17.214 fused_ordering(883) 00:16:17.214 fused_ordering(884) 00:16:17.214 fused_ordering(885) 00:16:17.214 fused_ordering(886) 00:16:17.214 fused_ordering(887) 00:16:17.214 fused_ordering(888) 00:16:17.214 fused_ordering(889) 00:16:17.214 fused_ordering(890) 00:16:17.214 fused_ordering(891) 00:16:17.214 fused_ordering(892) 00:16:17.214 fused_ordering(893) 00:16:17.214 fused_ordering(894) 00:16:17.214 fused_ordering(895) 00:16:17.214 fused_ordering(896) 00:16:17.214 fused_ordering(897) 00:16:17.214 fused_ordering(898) 00:16:17.214 fused_ordering(899) 00:16:17.214 fused_ordering(900) 00:16:17.214 fused_ordering(901) 00:16:17.214 fused_ordering(902) 00:16:17.214 fused_ordering(903) 00:16:17.214 fused_ordering(904) 00:16:17.214 fused_ordering(905) 00:16:17.214 fused_ordering(906) 00:16:17.214 fused_ordering(907) 00:16:17.214 fused_ordering(908) 00:16:17.214 fused_ordering(909) 00:16:17.214 fused_ordering(910) 00:16:17.214 fused_ordering(911) 00:16:17.214 fused_ordering(912) 00:16:17.214 fused_ordering(913) 00:16:17.214 fused_ordering(914) 00:16:17.214 fused_ordering(915) 00:16:17.214 fused_ordering(916) 00:16:17.214 fused_ordering(917) 00:16:17.214 fused_ordering(918) 00:16:17.214 fused_ordering(919) 00:16:17.214 fused_ordering(920) 00:16:17.215 fused_ordering(921) 00:16:17.215 fused_ordering(922) 00:16:17.215 fused_ordering(923) 00:16:17.215 fused_ordering(924) 00:16:17.215 fused_ordering(925) 00:16:17.215 fused_ordering(926) 00:16:17.215 fused_ordering(927) 00:16:17.215 fused_ordering(928) 00:16:17.215 fused_ordering(929) 00:16:17.215 fused_ordering(930) 00:16:17.215 fused_ordering(931) 00:16:17.215 fused_ordering(932) 00:16:17.215 fused_ordering(933) 00:16:17.215 fused_ordering(934) 00:16:17.215 fused_ordering(935) 00:16:17.215 fused_ordering(936) 00:16:17.215 fused_ordering(937) 00:16:17.215 fused_ordering(938) 00:16:17.215 fused_ordering(939) 00:16:17.215 fused_ordering(940) 00:16:17.215 fused_ordering(941) 00:16:17.215 fused_ordering(942) 00:16:17.215 fused_ordering(943) 00:16:17.215 fused_ordering(944) 00:16:17.215 fused_ordering(945) 00:16:17.215 fused_ordering(946) 00:16:17.215 fused_ordering(947) 00:16:17.215 fused_ordering(948) 00:16:17.215 fused_ordering(949) 00:16:17.215 fused_ordering(950) 00:16:17.215 fused_ordering(951) 00:16:17.215 fused_ordering(952) 00:16:17.215 fused_ordering(953) 00:16:17.215 fused_ordering(954) 00:16:17.215 fused_ordering(955) 00:16:17.215 fused_ordering(956) 00:16:17.215 fused_ordering(957) 00:16:17.215 fused_ordering(958) 
00:16:17.215 fused_ordering(959) 00:16:17.215 fused_ordering(960) 00:16:17.215 fused_ordering(961) 00:16:17.215 fused_ordering(962) 00:16:17.215 fused_ordering(963) 00:16:17.215 fused_ordering(964) 00:16:17.215 fused_ordering(965) 00:16:17.215 fused_ordering(966) 00:16:17.215 fused_ordering(967) 00:16:17.215 fused_ordering(968) 00:16:17.215 fused_ordering(969) 00:16:17.215 fused_ordering(970) 00:16:17.215 fused_ordering(971) 00:16:17.215 fused_ordering(972) 00:16:17.215 fused_ordering(973) 00:16:17.215 fused_ordering(974) 00:16:17.215 fused_ordering(975) 00:16:17.215 fused_ordering(976) 00:16:17.215 fused_ordering(977) 00:16:17.215 fused_ordering(978) 00:16:17.215 fused_ordering(979) 00:16:17.215 fused_ordering(980) 00:16:17.215 fused_ordering(981) 00:16:17.215 fused_ordering(982) 00:16:17.215 fused_ordering(983) 00:16:17.215 fused_ordering(984) 00:16:17.215 fused_ordering(985) 00:16:17.215 fused_ordering(986) 00:16:17.215 fused_ordering(987) 00:16:17.215 fused_ordering(988) 00:16:17.215 fused_ordering(989) 00:16:17.215 fused_ordering(990) 00:16:17.215 fused_ordering(991) 00:16:17.215 fused_ordering(992) 00:16:17.215 fused_ordering(993) 00:16:17.215 fused_ordering(994) 00:16:17.215 fused_ordering(995) 00:16:17.215 fused_ordering(996) 00:16:17.215 fused_ordering(997) 00:16:17.215 fused_ordering(998) 00:16:17.215 fused_ordering(999) 00:16:17.215 fused_ordering(1000) 00:16:17.215 fused_ordering(1001) 00:16:17.215 fused_ordering(1002) 00:16:17.215 fused_ordering(1003) 00:16:17.215 fused_ordering(1004) 00:16:17.215 fused_ordering(1005) 00:16:17.215 fused_ordering(1006) 00:16:17.215 fused_ordering(1007) 00:16:17.215 fused_ordering(1008) 00:16:17.215 fused_ordering(1009) 00:16:17.215 fused_ordering(1010) 00:16:17.215 fused_ordering(1011) 00:16:17.215 fused_ordering(1012) 00:16:17.215 fused_ordering(1013) 00:16:17.215 fused_ordering(1014) 00:16:17.215 fused_ordering(1015) 00:16:17.215 fused_ordering(1016) 00:16:17.215 fused_ordering(1017) 00:16:17.215 fused_ordering(1018) 00:16:17.215 fused_ordering(1019) 00:16:17.215 fused_ordering(1020) 00:16:17.215 fused_ordering(1021) 00:16:17.215 fused_ordering(1022) 00:16:17.215 fused_ordering(1023) 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:17.215 rmmod nvme_tcp 00:16:17.215 rmmod nvme_fabrics 00:16:17.215 rmmod nvme_keyring 00:16:17.215 08:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:17.215 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:17.215 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:17.215 08:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 3693443 ']' 00:16:17.215 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 3693443 00:16:17.215 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3693443 ']' 00:16:17.215 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3693443 00:16:17.215 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:17.215 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:17.215 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3693443 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3693443' 00:16:17.477 killing process with pid 3693443 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3693443 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3693443 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.477 08:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:20.031 00:16:20.031 real 0m13.165s 00:16:20.031 user 0m6.908s 00:16:20.031 sys 0m6.906s 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.031 ************************************ 00:16:20.031 END TEST nvmf_fused_ordering 00:16:20.031 
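(Editor's note: the nvmftestfini teardown traced above unloads the NVMe kernel modules, kills the target by the pid saved at startup, restores iptables with the SPDK rules filtered out, and tears down the namespace. A condensed sketch using the values from this run; _remove_spdk_ns executes with xtrace muted, so the ip netns delete line is an assumption about its effect:)

    sync
    modprobe -v -r nvme-tcp        # pulls out nvme_tcp / nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 3693443                   # killprocess: pid recorded when nvmf_tgt started (reactor_1 here)

    # iptr: reload iptables minus the SPDK-inserted rules, keeping everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns (trace is muted here)
    ip -4 addr flush cvl_0_1          # final flush, as on the last line of the test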
************************************ 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.031 ************************************ 00:16:20.031 START TEST nvmf_ns_masking 00:16:20.031 ************************************ 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:20.031 * Looking for test storage... 00:16:20.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.031 --rc genhtml_branch_coverage=1 00:16:20.031 --rc genhtml_function_coverage=1 00:16:20.031 --rc genhtml_legend=1 00:16:20.031 --rc geninfo_all_blocks=1 00:16:20.031 --rc geninfo_unexecuted_blocks=1 00:16:20.031 00:16:20.031 ' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.031 --rc genhtml_branch_coverage=1 00:16:20.031 --rc genhtml_function_coverage=1 00:16:20.031 --rc genhtml_legend=1 00:16:20.031 --rc geninfo_all_blocks=1 00:16:20.031 --rc geninfo_unexecuted_blocks=1 00:16:20.031 00:16:20.031 ' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.031 --rc genhtml_branch_coverage=1 00:16:20.031 --rc genhtml_function_coverage=1 00:16:20.031 --rc genhtml_legend=1 00:16:20.031 --rc geninfo_all_blocks=1 00:16:20.031 --rc geninfo_unexecuted_blocks=1 00:16:20.031 00:16:20.031 ' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:20.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.031 --rc genhtml_branch_coverage=1 00:16:20.031 --rc genhtml_function_coverage=1 00:16:20.031 --rc genhtml_legend=1 00:16:20.031 --rc geninfo_all_blocks=1 00:16:20.031 --rc geninfo_unexecuted_blocks=1 00:16:20.031 00:16:20.031 ' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.031 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=dc72ef56-8d1b-47bc-adde-44f2b0b7c03e 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fec46b60-db2a-4a37-b221-bdd3ff03bb90 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3681bd4d-f27c-4c17-a5f3-f4e18be8980d 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:20.032 08:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:26.616 08:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.616 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:26.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:26.617 08:31:18 
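
The gather_supported_nvmf_pci_devs block above classifies NICs by PCI vendor:device ID: 8086:1592 and 8086:159b are Intel E810 variants, 8086:37d2 is X722, and the 15b3 entries are Mellanox parts. Because this job sets SPDK_TEST_NVMF_NICS=e810 and the transport is TCP (the rdma-only branches are skipped), only the two E810 functions survive into pci_devs. A rough stand-in for that lookup using lspci rather than SPDK's pci_bus_cache map (illustrative only, not the harness code):

    # collect PCI addresses of E810 ports; the device IDs are the ones
    # matched in the trace (0x1592/0x159b)
    e810=()
    while read -r addr _ id _; do
        case "$id" in 8086:1592|8086:159b) e810+=("$addr") ;; esac
    done < <(lspci -Dn)
    echo "found ${#e810[@]} E810 function(s): ${e810[*]}"
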
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:26.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:26.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:26.617 
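
Each discovered PCI function is then resolved to its kernel netdev through sysfs: the trace expands /sys/bus/pci/devices/$pci/net/* and strips the directory prefix, yielding cvl_0_0 and cvl_0_1 (the [[ up == up ]] test above is the harness additionally checking the device's operstate). A self-contained sketch of the same mapping:

    # resolve PCI functions to netdev names, as pci_net_devs=(...) does above
    net_devs=()
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] && net_devs+=("${path##*/}")
        done
    done
    echo "${net_devs[@]}"   # -> cvl_0_0 cvl_0_1 on this host
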
08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:26.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.617 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.878 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.878 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.878 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:26.878 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.878 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.878 08:31:18 
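
nvmf_tcp_init above builds the test topology out of the two physical E810 ports, presumably cabled back-to-back: the target-side port is moved into a dedicated network namespace while the initiator-side port stays in the default one, so the NVMe/TCP traffic really crosses the wire instead of looping through the host stack. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
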
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.878 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:26.878 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:26.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:16:26.878 00:16:26.878 --- 10.0.0.2 ping statistics --- 00:16:26.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.878 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:16:26.878 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:16:27.139 00:16:27.139 --- 10.0.0.1 ping statistics --- 00:16:27.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.139 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=3698413 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 3698413 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3698413 ']' 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
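
The ipts helper above opens TCP port 4420 on the initiator-facing interface and tags the rule with an SPDK_NVMF: comment; the matching iptr helper at the end of the test restores every rule except the tagged ones, so the run cleans up only what it added. Both lines appear verbatim in the trace:

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # later, during teardown:
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The two pings that follow (10.0.0.2 from the default namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm both directions work before nvmf_tgt is launched inside the namespace.
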
common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.139 08:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:27.139 [2024-10-01 08:31:18.825384] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:16:27.139 [2024-10-01 08:31:18.825459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.139 [2024-10-01 08:31:18.900192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.398 [2024-10-01 08:31:18.973219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.398 [2024-10-01 08:31:18.973261] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.398 [2024-10-01 08:31:18.973268] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.398 [2024-10-01 08:31:18.973275] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.398 [2024-10-01 08:31:18.973281] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.398 [2024-10-01 08:31:18.973874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.967 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:27.967 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:27.967 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:27.967 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:27.967 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:27.967 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.967 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:28.228 [2024-10-01 08:31:19.826540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.228 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:28.228 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:28.228 08:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:28.228 Malloc1 00:16:28.488 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 
64 512 -b Malloc2 00:16:28.488 Malloc2 00:16:28.488 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:28.748 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:29.008 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.008 [2024-10-01 08:31:20.769954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.008 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:29.008 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3681bd4d-f27c-4c17-a5f3-f4e18be8980d -a 10.0.0.2 -s 4420 -i 4 00:16:29.269 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.269 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:29.269 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.269 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:29.269 08:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:31.178 08:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:31.178 08:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:31.178 08:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.178 08:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:31.178 08:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.178 08:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:31.178 08:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:31.178 08:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:31.438 [ 0]:0x1 00:16:31.438 08:31:23 
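
The sequence above is the whole target-plus-initiator bring-up for the test: a TCP transport, two 64 MB malloc bdevs, one subsystem with namespace 1 auto-visible, a listener, and a kernel-initiator connection that pins down the host identity the masking RPCs will key on (-q sets the host NQN, -I the host ID, -i requests four I/O queues). Collected from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 -I 3681bd4d-f27c-4c17-a5f3-f4e18be8980d -i 4

waitforserial then polls lsblk for the subsystem serial (grep -c SPDKISFASTANDAWESOME) until the expected number of namespaces shows up as block devices.
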
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e83de50c282840c4be322dfa5d36e86f 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e83de50c282840c4be322dfa5d36e86f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.438 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:31.699 [ 0]:0x1 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e83de50c282840c4be322dfa5d36e86f 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e83de50c282840c4be322dfa5d36e86f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:31.699 [ 1]:0x2 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9734c985ebb04c86b3ec4767dd52505d 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9734c985ebb04c86b3ec4767dd52505d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:31.699 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.961 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.222 08:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:32.222 08:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
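
ns_is_visible above is the core assertion of the test: a namespace counts as visible when it appears in nvme list-ns and id-ns returns a real NGUID rather than all zeros. A reconstruction of the helper as exercised in the trace (not the verbatim script):

    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1   # passes while host1 is allowed to see namespace 1

Namespace 2 is then added from Malloc2 and both namespaces are checked before the controller is disconnected and namespace 1 is re-created with --no-auto-visible.
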
target/ns_masking.sh@83 -- # connect 1 00:16:32.222 08:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3681bd4d-f27c-4c17-a5f3-f4e18be8980d -a 10.0.0.2 -s 4420 -i 4 00:16:32.482 08:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:32.482 08:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:32.482 08:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:32.482 08:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:32.482 08:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:32.482 08:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:34.394 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:34.394 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:34.394 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:34.394 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:34.394 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.394 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:34.394 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:34.394 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:34.655 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:34.656 [ 0]:0x2 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9734c985ebb04c86b3ec4767dd52505d 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9734c985ebb04c86b3ec4767dd52505d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:34.656 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:34.916 [ 0]:0x1 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e83de50c282840c4be322dfa5d36e86f 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e83de50c282840c4be322dfa5d36e86f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:34.916 [ 1]:0x2 00:16:34.916 
08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9734c985ebb04c86b3ec4767dd52505d 00:16:34.916 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9734c985ebb04c86b3ec4767dd52505d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:34.917 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:35.177 [ 0]:0x2 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
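
The add/remove cycle above is the masking behaviour itself: once namespace 1 is re-added with --no-auto-visible it stays hidden until nvmf_ns_add_host grants nqn.2016-06.io.spdk:host1 access, and nvmf_ns_remove_host revokes it again; the NOT wrapper around ns_is_visible inverts the exit code so an expected miss counts as a pass. The RPC calls, as issued in the trace:

    $rpc nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Throughout, namespace 2 (auto-visible) keeps reporting its real NGUID.
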
/dev/nvme0 -n 0x2 -o json 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9734c985ebb04c86b3ec4767dd52505d 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9734c985ebb04c86b3ec4767dd52505d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:35.177 08:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:35.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.437 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:35.698 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:35.698 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3681bd4d-f27c-4c17-a5f3-f4e18be8980d -a 10.0.0.2 -s 4420 -i 4 00:16:35.698 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:35.698 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:35.698 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.698 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:35.698 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:35.698 08:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:38.243 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:38.243 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:38.243 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:38.244 
08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:38.244 [ 0]:0x1 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e83de50c282840c4be322dfa5d36e86f 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e83de50c282840c4be322dfa5d36e86f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:38.244 [ 1]:0x2 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9734c985ebb04c86b3ec4767dd52505d 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9734c985ebb04c86b3ec4767dd52505d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.244 08:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.244 [ 0]:0x2 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:38.244 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9734c985ebb04c86b3ec4767dd52505d 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9734c985ebb04c86b3ec4767dd52505d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:38.506 [2024-10-01 08:31:30.265123] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:38.506 request: 00:16:38.506 { 00:16:38.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.506 "nsid": 2, 00:16:38.506 "host": "nqn.2016-06.io.spdk:host1", 00:16:38.506 "method": "nvmf_ns_remove_host", 00:16:38.506 "req_id": 1 00:16:38.506 } 00:16:38.506 Got JSON-RPC error response 00:16:38.506 response: 00:16:38.506 { 00:16:38.506 "code": -32602, 00:16:38.506 "message": "Invalid parameters" 00:16:38.506 } 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:38.506 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.767 08:31:30 
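
The JSON-RPC exchange above documents the negative case: namespace 2 was created without --no-auto-visible, so the target refuses to edit its per-host visibility list and answers with error -32602 (Invalid parameters); the NOT wrapper expects exactly that failure. Reproduced as a one-liner:

    # rejected: the trace shows visibility edits are only accepted for a
    # namespace created with --no-auto-visible
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
        || echo "rejected with -32602, as expected"
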
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:38.767 [ 0]:0x2 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9734c985ebb04c86b3ec4767dd52505d 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9734c985ebb04c86b3ec4767dd52505d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:38.767 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3700750 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3700750 /var/tmp/host.sock 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3700750 ']' 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:39.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.028 08:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 [2024-10-01 08:31:30.657462] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:16:39.028 [2024-10-01 08:31:30.657519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700750 ] 00:16:39.028 [2024-10-01 08:31:30.735195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.028 [2024-10-01 08:31:30.799590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.971 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.971 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:39.971 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.971 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:39.971 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid dc72ef56-8d1b-47bc-adde-44f2b0b7c03e 00:16:39.971 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:16:39.971 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DC72EF568D1B47BCADDE44F2B0B7C03E -i 00:16:40.232 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fec46b60-db2a-4a37-b221-bdd3ff03bb90 00:16:40.232 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:16:40.232 08:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FEC46B60DB2A4A37B221BDD3FF03BB90 -i 00:16:40.494 08:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:40.494 08:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:40.755 08:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:40.755 08:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:41.016 nvme0n1 00:16:41.016 08:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:41.016 08:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
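
uuid2nguid above derives the 32-hex-digit NGUID passed to -g from a namespace UUID; the trace shows tr -d - doing the dash stripping, and the upper-casing step is inferred from the resulting -g values rather than visible in the trace. A sketch of the helper under that assumption:

    uuid2nguid() {
        # strip dashes, upper-case: dc72ef56-... -> DC72EF568D1B47BCADDE44F2B0B7C03E
        tr -d - <<< "${1^^}"
    }
    uuid2nguid dc72ef56-8d1b-47bc-adde-44f2b0b7c03e

Both namespaces are re-created with fixed NGUIDs and both hosts are granted access, so the host-side bdev layer can later verify that the identifiers round-trip.
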
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:41.277 nvme1n2 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:41.538 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:41.799 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ dc72ef56-8d1b-47bc-adde-44f2b0b7c03e == \d\c\7\2\e\f\5\6\-\8\d\1\b\-\4\7\b\c\-\a\d\d\e\-\4\4\f\2\b\0\b\7\c\0\3\e ]] 00:16:41.799 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:41.799 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:41.799 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ fec46b60-db2a-4a37-b221-bdd3ff03bb90 == \f\e\c\4\6\b\6\0\-\d\b\2\a\-\4\a\3\7\-\b\2\2\1\-\b\d\d\3\f\f\0\3\b\b\9\0 ]] 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3700750 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3700750 ']' 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3700750 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3700750 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3700750' 00:16:42.061 
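
The verification above runs through a second SPDK instance acting as the host (spdk_tgt listening on /var/tmp/host.sock): bdev_nvme_attach_controller creates the nvme0/nvme1 bdevs over the fabric, and bdev_get_bdevs confirms each attached namespace reports the UUID it was provisioned with. The hostrpc helper is just rpc.py pointed at the host socket, per the @48 expansion in the trace:

    hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'  # dc72ef56-8d1b-47bc-adde-44f2b0b7c03e
    hostrpc bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'  # fec46b60-db2a-4a37-b221-bdd3ff03bb90
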
killing process with pid 3700750 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3700750 00:16:42.061 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3700750 00:16:42.322 08:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.322 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:42.322 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:42.322 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:42.322 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:42.322 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:42.322 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:42.322 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:42.322 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:42.322 rmmod nvme_tcp 00:16:42.322 rmmod nvme_fabrics 00:16:42.584 rmmod nvme_keyring 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 3698413 ']' 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 3698413 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3698413 ']' 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3698413 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3698413 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3698413' 00:16:42.584 killing process with pid 3698413 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3698413 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3698413 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:42.584 08:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:16:42.584 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:42.845 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:16:42.845 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:42.845 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:42.845 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.845 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.845 08:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:44.756 00:16:44.756 real 0m25.121s 00:16:44.756 user 0m25.561s 00:16:44.756 sys 0m7.562s 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:44.756 ************************************ 00:16:44.756 END TEST nvmf_ns_masking 00:16:44.756 ************************************ 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:44.756 ************************************ 00:16:44.756 START TEST nvmf_nvme_cli 00:16:44.756 ************************************ 00:16:44.756 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:45.019 * Looking for test storage... 
00:16:45.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:45.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.019 --rc genhtml_branch_coverage=1 00:16:45.019 --rc genhtml_function_coverage=1 00:16:45.019 --rc genhtml_legend=1 00:16:45.019 --rc geninfo_all_blocks=1 00:16:45.019 --rc geninfo_unexecuted_blocks=1 00:16:45.019 00:16:45.019 ' 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:45.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.019 --rc genhtml_branch_coverage=1 00:16:45.019 --rc genhtml_function_coverage=1 00:16:45.019 --rc genhtml_legend=1 00:16:45.019 --rc geninfo_all_blocks=1 00:16:45.019 --rc geninfo_unexecuted_blocks=1 00:16:45.019 00:16:45.019 ' 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:45.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.019 --rc genhtml_branch_coverage=1 00:16:45.019 --rc genhtml_function_coverage=1 00:16:45.019 --rc genhtml_legend=1 00:16:45.019 --rc geninfo_all_blocks=1 00:16:45.019 --rc geninfo_unexecuted_blocks=1 00:16:45.019 00:16:45.019 ' 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:45.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.019 --rc genhtml_branch_coverage=1 00:16:45.019 --rc genhtml_function_coverage=1 00:16:45.019 --rc genhtml_legend=1 00:16:45.019 --rc geninfo_all_blocks=1 00:16:45.019 --rc geninfo_unexecuted_blocks=1 00:16:45.019 00:16:45.019 ' 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
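
    # The `lt 1.15 2` check traced above splits dotted versions on IFS=.- and
    # compares them field by field; a minimal standalone sketch of the same
    # cmp_versions idea:
    lt() {
        local IFS=.-
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first differing field decides: less
            ((${a[i]:-0} > ${b[i]:-0})) && return 1   # greater
        done
        return 1                                      # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov predates 2.x"
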
00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.019 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.020 08:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:45.020 08:31:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:53.162 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:53.163 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:53.163 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:53.163 08:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:53.163 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:53.163 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:53.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:16:53.163 00:16:53.163 --- 10.0.0.2 ping statistics --- 00:16:53.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.163 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:53.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:16:53.163 00:16:53.163 --- 10.0.0.1 ping statistics --- 00:16:53.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.163 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:53.163 08:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:53.163 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:53.163 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:53.163 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:53.163 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.163 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=3705677 00:16:53.163 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 3705677 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3705677 ']' 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.164 [2024-10-01 08:31:44.092586] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
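
    # Condensed from the nvmf_tcp_init and nvmfappstart traces above: one
    # physical port is moved into a private network namespace so the initiator
    # (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) exchange real
    # TCP traffic on a single machine, and nvmf_tgt is then started inside
    # that namespace. Interface and namespace names are the ones this job uses.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten's actual polling loop is not shown in this excerpt; a
    # plausible stand-in is to poll the RPC socket until the target answers:
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
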
00:16:53.164 [2024-10-01 08:31:44.092638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.164 [2024-10-01 08:31:44.159445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.164 [2024-10-01 08:31:44.224102] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.164 [2024-10-01 08:31:44.224140] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.164 [2024-10-01 08:31:44.224148] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.164 [2024-10-01 08:31:44.224155] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.164 [2024-10-01 08:31:44.224161] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.164 [2024-10-01 08:31:44.225912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.164 [2024-10-01 08:31:44.226026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.164 [2024-10-01 08:31:44.226120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.164 [2024-10-01 08:31:44.226120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.164 [2024-10-01 08:31:44.939291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.164 Malloc0 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
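
    # The provisioning-and-connect sequence the nvme_cli test walks through
    # from here, condensed (NQN, serial, and bdev sizes are taken from the
    # trace; `nvme` is nvme-cli on the initiator side):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme discover -t tcp -a 10.0.0.2 -s 4420   # lists discovery subsystem + cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 namespaces
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
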
00:16:53.164 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.424 Malloc1 00:16:53.424 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.424 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:53.424 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.424 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.424 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.424 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.424 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.424 08:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.424 [2024-10-01 08:31:45.029180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:53.424 00:16:53.424 Discovery Log Number of Records 2, Generation counter 2 00:16:53.424 =====Discovery Log Entry 0====== 00:16:53.424 trtype: tcp 00:16:53.424 adrfam: ipv4 00:16:53.424 subtype: current discovery subsystem 00:16:53.424 treq: not required 00:16:53.424 portid: 0 00:16:53.424 trsvcid: 4420 00:16:53.424 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:16:53.424 traddr: 10.0.0.2 00:16:53.424 eflags: explicit discovery connections, duplicate discovery information 00:16:53.424 sectype: none 00:16:53.424 =====Discovery Log Entry 1====== 00:16:53.424 trtype: tcp 00:16:53.424 adrfam: ipv4 00:16:53.424 subtype: nvme subsystem 00:16:53.424 treq: not required 00:16:53.424 portid: 0 00:16:53.424 trsvcid: 4420 00:16:53.424 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:53.424 traddr: 10.0.0.2 00:16:53.424 eflags: none 00:16:53.424 sectype: none 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:53.424 08:31:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.332 08:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:55.332 08:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:55.332 08:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.332 08:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:55.332 08:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:55.332 08:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:57.238 08:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:57.238 /dev/nvme0n2 ]] 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.238 08:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:57.497 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.757 08:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.757 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.758 rmmod nvme_tcp 00:16:57.758 rmmod nvme_fabrics 00:16:57.758 rmmod nvme_keyring 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 3705677 ']' 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 3705677 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3705677 ']' 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3705677 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3705677 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3705677' 00:16:57.758 killing process with pid 3705677 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3705677 00:16:57.758 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3705677 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.019 08:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.033 08:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:00.033 00:17:00.033 real 0m15.225s 00:17:00.033 user 0m23.946s 00:17:00.033 sys 0m6.146s 00:17:00.033 08:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.033 08:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:00.033 ************************************ 00:17:00.033 END TEST nvmf_nvme_cli 00:17:00.033 ************************************ 00:17:00.033 08:31:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:00.033 08:31:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:00.033 08:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:00.033 08:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.033 08:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.294 ************************************ 00:17:00.294 START TEST nvmf_vfio_user 00:17:00.294 ************************************ 00:17:00.294 08:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:00.294 * Looking for test storage... 00:17:00.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.295 08:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:00.295 08:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:17:00.295 08:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:00.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.295 --rc genhtml_branch_coverage=1 00:17:00.295 --rc genhtml_function_coverage=1 00:17:00.295 --rc genhtml_legend=1 00:17:00.295 --rc geninfo_all_blocks=1 00:17:00.295 --rc geninfo_unexecuted_blocks=1 00:17:00.295 00:17:00.295 ' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:00.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.295 --rc genhtml_branch_coverage=1 00:17:00.295 --rc genhtml_function_coverage=1 00:17:00.295 --rc genhtml_legend=1 00:17:00.295 --rc geninfo_all_blocks=1 00:17:00.295 --rc geninfo_unexecuted_blocks=1 00:17:00.295 00:17:00.295 ' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:00.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.295 --rc genhtml_branch_coverage=1 00:17:00.295 --rc genhtml_function_coverage=1 00:17:00.295 --rc genhtml_legend=1 00:17:00.295 --rc geninfo_all_blocks=1 00:17:00.295 --rc geninfo_unexecuted_blocks=1 00:17:00.295 00:17:00.295 ' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:00.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.295 --rc genhtml_branch_coverage=1 00:17:00.295 --rc genhtml_function_coverage=1 00:17:00.295 --rc genhtml_legend=1 00:17:00.295 --rc geninfo_all_blocks=1 00:17:00.295 --rc geninfo_unexecuted_blocks=1 00:17:00.295 00:17:00.295 ' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.295 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3707470 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3707470' 00:17:00.557 Process pid: 3707470 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3707470 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3707470 ']' 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.557 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:00.557 [2024-10-01 08:31:52.179670] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:17:00.557 [2024-10-01 08:31:52.179744] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.557 [2024-10-01 08:31:52.243503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.557 [2024-10-01 08:31:52.307664] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.557 [2024-10-01 08:31:52.307704] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:00.557 [2024-10-01 08:31:52.307712] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.557 [2024-10-01 08:31:52.307718] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.557 [2024-10-01 08:31:52.307724] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.557 [2024-10-01 08:31:52.309378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.557 [2024-10-01 08:31:52.309489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.557 [2024-10-01 08:31:52.309643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.557 [2024-10-01 08:31:52.309644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.498 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.498 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:01.498 08:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:02.441 08:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:02.441 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:02.441 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:02.441 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:02.441 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:02.441 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:02.702 Malloc1 00:17:02.702 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:02.962 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:02.962 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:03.222 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:03.223 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:03.223 08:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:03.484 Malloc2 00:17:03.484 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
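The per-device bring-up traced above reduces to a short RPC sequence; the trace continues below with device 2's remaining add_ns and add_listener calls, which repeat what device 1 just completed. The following is a minimal sketch, not the test script itself: it assumes an nvmf_tgt is already running and listening on /var/tmp/spdk.sock, and the rpc.py path is a stand-in for the full workspace copy the trace invokes.

  #!/usr/bin/env bash
  rpc=./scripts/rpc.py   # stand-in for the jenkins workspace path shown in the trace

  # One VFIOUSER transport for the target, plus a root directory for the sockets.
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user

  for i in 1 2; do   # NUM_DEVICES=2 in nvmf_vfio_user.sh
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
      $rpc bdev_malloc_create 64 512 -b "Malloc$i"
      $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      # The listener address is a directory; the target creates the vfio-user
      # socket (cntrl) underneath it for clients to attach to.
      $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done

The "Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully" line further down is the client side attaching to exactly that per-listener cntrl socket.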
00:17:03.484 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:03.746 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:04.007 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:04.007 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:04.007 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:04.007 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:04.007 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:04.007 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:04.007 [2024-10-01 08:31:55.683503] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:17:04.007 [2024-10-01 08:31:55.683549] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3708190 ] 00:17:04.007 [2024-10-01 08:31:55.715821] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:04.007 [2024-10-01 08:31:55.722290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:04.007 [2024-10-01 08:31:55.722313] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f37ba152000 00:17:04.007 [2024-10-01 08:31:55.723290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.007 [2024-10-01 08:31:55.724290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.007 [2024-10-01 08:31:55.725292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.007 [2024-10-01 08:31:55.729999] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:04.007 [2024-10-01 08:31:55.730321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:04.007 [2024-10-01 08:31:55.731330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.007 [2024-10-01 08:31:55.732336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:04.007 [2024-10-01 08:31:55.733342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.007 [2024-10-01 08:31:55.734359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:04.007 [2024-10-01 08:31:55.734369] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f37ba147000 00:17:04.007 [2024-10-01 08:31:55.735696] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:04.007 [2024-10-01 08:31:55.751608] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:04.007 [2024-10-01 08:31:55.751631] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:04.007 [2024-10-01 08:31:55.756486] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:04.007 [2024-10-01 08:31:55.756530] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:04.007 [2024-10-01 08:31:55.756617] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:04.007 [2024-10-01 08:31:55.756633] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:04.007 [2024-10-01 08:31:55.756639] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:04.007 [2024-10-01 08:31:55.757488] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:04.007 [2024-10-01 08:31:55.757497] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:04.007 [2024-10-01 08:31:55.757504] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:04.007 [2024-10-01 08:31:55.758508] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:04.007 [2024-10-01 08:31:55.758517] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:04.007 [2024-10-01 08:31:55.758524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.007 [2024-10-01 08:31:55.759493] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:04.007 [2024-10-01 08:31:55.759501] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.007 [2024-10-01 08:31:55.760502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:04.007 [2024-10-01 
08:31:55.760510] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:04.007 [2024-10-01 08:31:55.760519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:04.007 [2024-10-01 08:31:55.760526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.007 [2024-10-01 08:31:55.760631] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:04.007 [2024-10-01 08:31:55.760636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.007 [2024-10-01 08:31:55.760641] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:04.007 [2024-10-01 08:31:55.761510] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:04.007 [2024-10-01 08:31:55.762507] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:04.007 [2024-10-01 08:31:55.763519] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:04.007 [2024-10-01 08:31:55.764519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:04.007 [2024-10-01 08:31:55.764585] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.007 [2024-10-01 08:31:55.765528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:04.007 [2024-10-01 08:31:55.765536] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.007 [2024-10-01 08:31:55.765540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765562] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:04.007 [2024-10-01 08:31:55.765569] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765583] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.007 [2024-10-01 08:31:55.765588] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.007 [2024-10-01 08:31:55.765592] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.007 [2024-10-01 08:31:55.765605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.007 [2024-10-01 08:31:55.765653] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:04.007 [2024-10-01 08:31:55.765661] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:04.007 [2024-10-01 08:31:55.765666] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:04.007 [2024-10-01 08:31:55.765671] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:04.007 [2024-10-01 08:31:55.765675] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:04.007 [2024-10-01 08:31:55.765680] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:04.007 [2024-10-01 08:31:55.765687] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:04.007 [2024-10-01 08:31:55.765692] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765700] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:04.007 [2024-10-01 08:31:55.765719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:04.007 [2024-10-01 08:31:55.765730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.007 [2024-10-01 08:31:55.765739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.007 [2024-10-01 08:31:55.765747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.007 [2024-10-01 08:31:55.765755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.007 [2024-10-01 08:31:55.765760] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765769] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:04.007 [2024-10-01 08:31:55.765790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:04.007 [2024-10-01 08:31:55.765795] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:04.007 [2024-10-01 08:31:55.765800] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765807] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:04.007 [2024-10-01 08:31:55.765831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:04.007 [2024-10-01 08:31:55.765892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765908] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:04.007 [2024-10-01 08:31:55.765912] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:04.007 [2024-10-01 08:31:55.765916] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.007 [2024-10-01 08:31:55.765922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:04.007 [2024-10-01 08:31:55.765931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:04.007 [2024-10-01 08:31:55.765939] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:04.007 [2024-10-01 08:31:55.765950] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:04.007 [2024-10-01 08:31:55.765966] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.008 [2024-10-01 08:31:55.765970] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.008 [2024-10-01 08:31:55.765973] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.008 [2024-10-01 08:31:55.765979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.765997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766009] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766017] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766024] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.008 [2024-10-01 08:31:55.766028] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.008 [2024-10-01 08:31:55.766032] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.008 [2024-10-01 08:31:55.766038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766058] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766073] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766079] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766084] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766095] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:04.008 [2024-10-01 08:31:55.766099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:04.008 [2024-10-01 08:31:55.766105] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:04.008 [2024-10-01 08:31:55.766122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766207] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:04.008 [2024-10-01 08:31:55.766212] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:04.008 [2024-10-01 08:31:55.766216] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:04.008 [2024-10-01 08:31:55.766219] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:04.008 [2024-10-01 08:31:55.766223] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:04.008 [2024-10-01 08:31:55.766229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:04.008 [2024-10-01 08:31:55.766237] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:04.008 [2024-10-01 08:31:55.766241] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:04.008 [2024-10-01 08:31:55.766245] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.008 [2024-10-01 08:31:55.766250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766258] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:04.008 [2024-10-01 08:31:55.766262] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.008 [2024-10-01 08:31:55.766265] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.008 [2024-10-01 08:31:55.766271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766279] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:04.008 [2024-10-01 08:31:55.766283] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:04.008 [2024-10-01 08:31:55.766287] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.008 [2024-10-01 08:31:55.766293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:04.008 ===================================================== 00:17:04.008 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:04.008 ===================================================== 00:17:04.008 Controller Capabilities/Features 00:17:04.008 ================================ 00:17:04.008 Vendor ID: 4e58 00:17:04.008 Subsystem Vendor ID: 4e58 00:17:04.008 Serial Number: SPDK1 00:17:04.008 Model Number: SPDK bdev Controller 00:17:04.008 Firmware Version: 25.01 00:17:04.008 Recommended Arb Burst: 6 00:17:04.008 IEEE OUI Identifier: 8d 6b 50 00:17:04.008 Multi-path I/O 00:17:04.008 May have multiple subsystem ports: Yes 00:17:04.008 May have multiple controllers: Yes 00:17:04.008 Associated with SR-IOV VF: No 00:17:04.008 Max Data Transfer Size: 131072 00:17:04.008 Max Number of Namespaces: 32 00:17:04.008 Max Number of I/O Queues: 127 00:17:04.008 NVMe Specification Version (VS): 1.3 00:17:04.008 NVMe Specification Version (Identify): 1.3 00:17:04.008 Maximum Queue Entries: 256 00:17:04.008 Contiguous Queues Required: Yes 00:17:04.008 Arbitration Mechanisms Supported 00:17:04.008 Weighted Round Robin: Not Supported 00:17:04.008 Vendor Specific: Not Supported 00:17:04.008 Reset Timeout: 15000 ms 00:17:04.008 Doorbell Stride: 4 bytes 00:17:04.008 NVM Subsystem Reset: Not Supported 00:17:04.008 Command Sets Supported 00:17:04.008 NVM Command Set: Supported 00:17:04.008 Boot Partition: Not Supported 00:17:04.008 Memory Page Size Minimum: 4096 bytes 00:17:04.008 Memory Page Size Maximum: 4096 bytes 00:17:04.008 Persistent Memory Region: Not Supported 00:17:04.008 Optional Asynchronous Events Supported 00:17:04.008 Namespace Attribute Notices: Supported 00:17:04.008 Firmware Activation Notices: Not Supported 00:17:04.008 ANA Change Notices: Not Supported 00:17:04.008 PLE Aggregate Log Change Notices: Not Supported 00:17:04.008 LBA Status Info Alert Notices: Not Supported 00:17:04.008 EGE Aggregate Log Change Notices: Not Supported 00:17:04.008 Normal NVM Subsystem Shutdown event: Not Supported 00:17:04.008 Zone Descriptor Change Notices: Not Supported 00:17:04.008 Discovery Log Change Notices: Not Supported 00:17:04.008 Controller Attributes 00:17:04.008 128-bit Host Identifier: Supported 00:17:04.008 Non-Operational Permissive Mode: Not Supported 00:17:04.008 NVM Sets: Not Supported 00:17:04.008 Read Recovery Levels: Not Supported 00:17:04.008 Endurance Groups: Not Supported 00:17:04.008 Predictable Latency Mode: Not Supported 00:17:04.008 Traffic Based Keep ALive: Not Supported 00:17:04.008 Namespace Granularity: Not Supported 00:17:04.008 SQ Associations: Not Supported 00:17:04.008 UUID List: Not Supported 00:17:04.008 Multi-Domain Subsystem: Not Supported 00:17:04.008 Fixed Capacity Management: Not Supported 00:17:04.008 Variable Capacity Management: Not Supported 00:17:04.008 Delete Endurance Group: Not Supported 00:17:04.008 Delete NVM Set: Not Supported 00:17:04.008 Extended LBA Formats Supported: Not Supported 00:17:04.008 Flexible Data Placement Supported: Not Supported 00:17:04.008 00:17:04.008 Controller Memory Buffer Support 00:17:04.008 ================================ 00:17:04.008 Supported: No 00:17:04.008 00:17:04.008 Persistent Memory Region Support 00:17:04.008 
================================ 00:17:04.008 Supported: No 00:17:04.008 00:17:04.008 Admin Command Set Attributes 00:17:04.008 ============================ 00:17:04.008 Security Send/Receive: Not Supported 00:17:04.008 Format NVM: Not Supported 00:17:04.008 Firmware Activate/Download: Not Supported 00:17:04.008 Namespace Management: Not Supported 00:17:04.008 Device Self-Test: Not Supported 00:17:04.008 Directives: Not Supported 00:17:04.008 NVMe-MI: Not Supported 00:17:04.008 Virtualization Management: Not Supported 00:17:04.008 Doorbell Buffer Config: Not Supported 00:17:04.008 Get LBA Status Capability: Not Supported 00:17:04.008 Command & Feature Lockdown Capability: Not Supported 00:17:04.008 Abort Command Limit: 4 00:17:04.008 Async Event Request Limit: 4 00:17:04.008 Number of Firmware Slots: N/A 00:17:04.008 Firmware Slot 1 Read-Only: N/A 00:17:04.008 Firmware Activation Without Reset: N/A 00:17:04.008 Multiple Update Detection Support: N/A 00:17:04.008 Firmware Update Granularity: No Information Provided 00:17:04.008 Per-Namespace SMART Log: No 00:17:04.008 Asymmetric Namespace Access Log Page: Not Supported 00:17:04.008 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:04.008 Command Effects Log Page: Supported 00:17:04.008 Get Log Page Extended Data: Supported 00:17:04.008 Telemetry Log Pages: Not Supported 00:17:04.008 Persistent Event Log Pages: Not Supported 00:17:04.008 Supported Log Pages Log Page: May Support 00:17:04.008 Commands Supported & Effects Log Page: Not Supported 00:17:04.008 Feature Identifiers & Effects Log Page:May Support 00:17:04.008 NVMe-MI Commands & Effects Log Page: May Support 00:17:04.008 Data Area 4 for Telemetry Log: Not Supported 00:17:04.008 Error Log Page Entries Supported: 128 00:17:04.008 Keep Alive: Supported 00:17:04.008 Keep Alive Granularity: 10000 ms 00:17:04.008 00:17:04.008 NVM Command Set Attributes 00:17:04.008 ========================== 00:17:04.008 Submission Queue Entry Size 00:17:04.008 Max: 64 00:17:04.008 Min: 64 00:17:04.008 Completion Queue Entry Size 00:17:04.008 Max: 16 00:17:04.008 Min: 16 00:17:04.008 Number of Namespaces: 32 00:17:04.008 Compare Command: Supported 00:17:04.008 Write Uncorrectable Command: Not Supported 00:17:04.008 Dataset Management Command: Supported 00:17:04.008 Write Zeroes Command: Supported 00:17:04.008 Set Features Save Field: Not Supported 00:17:04.008 Reservations: Not Supported 00:17:04.008 Timestamp: Not Supported 00:17:04.008 Copy: Supported 00:17:04.008 Volatile Write Cache: Present 00:17:04.008 Atomic Write Unit (Normal): 1 00:17:04.008 Atomic Write Unit (PFail): 1 00:17:04.008 Atomic Compare & Write Unit: 1 00:17:04.008 Fused Compare & Write: Supported 00:17:04.008 Scatter-Gather List 00:17:04.008 SGL Command Set: Supported (Dword aligned) 00:17:04.008 SGL Keyed: Not Supported 00:17:04.008 SGL Bit Bucket Descriptor: Not Supported 00:17:04.008 SGL Metadata Pointer: Not Supported 00:17:04.008 Oversized SGL: Not Supported 00:17:04.008 SGL Metadata Address: Not Supported 00:17:04.008 SGL Offset: Not Supported 00:17:04.008 Transport SGL Data Block: Not Supported 00:17:04.008 Replay Protected Memory Block: Not Supported 00:17:04.008 00:17:04.008 Firmware Slot Information 00:17:04.008 ========================= 00:17:04.008 Active slot: 1 00:17:04.008 Slot 1 Firmware Revision: 25.01 00:17:04.008 00:17:04.008 00:17:04.008 Commands Supported and Effects 00:17:04.008 ============================== 00:17:04.008 Admin Commands 00:17:04.008 -------------- 00:17:04.008 Get Log Page (02h): Supported 
00:17:04.008 Identify (06h): Supported 00:17:04.008 Abort (08h): Supported 00:17:04.008 Set Features (09h): Supported 00:17:04.008 Get Features (0Ah): Supported 00:17:04.008 Asynchronous Event Request (0Ch): Supported 00:17:04.008 Keep Alive (18h): Supported 00:17:04.008 I/O Commands 00:17:04.008 ------------ 00:17:04.008 Flush (00h): Supported LBA-Change 00:17:04.008 Write (01h): Supported LBA-Change 00:17:04.008 Read (02h): Supported 00:17:04.008 Compare (05h): Supported 00:17:04.008 Write Zeroes (08h): Supported LBA-Change 00:17:04.008 Dataset Management (09h): Supported LBA-Change 00:17:04.008 Copy (19h): Supported LBA-Change 00:17:04.008 00:17:04.008 Error Log 00:17:04.008 ========= 00:17:04.008 00:17:04.008 Arbitration 00:17:04.008 =========== 00:17:04.008 Arbitration Burst: 1 00:17:04.008 00:17:04.008 Power Management 00:17:04.008 ================ 00:17:04.008 Number of Power States: 1 00:17:04.008 Current Power State: Power State #0 00:17:04.008 Power State #0: 00:17:04.008 Max Power: 0.00 W 00:17:04.008 Non-Operational State: Operational 00:17:04.008 Entry Latency: Not Reported 00:17:04.008 Exit Latency: Not Reported 00:17:04.008 Relative Read Throughput: 0 00:17:04.008 Relative Read Latency: 0 00:17:04.008 Relative Write Throughput: 0 00:17:04.008 Relative Write Latency: 0 00:17:04.008 Idle Power: Not Reported 00:17:04.008 Active Power: Not Reported 00:17:04.008 Non-Operational Permissive Mode: Not Supported 00:17:04.008 00:17:04.008 Health Information 00:17:04.008 ================== 00:17:04.008 Critical Warnings: 00:17:04.008 Available Spare Space: OK 00:17:04.008 Temperature: OK 00:17:04.008 Device Reliability: OK 00:17:04.008 Read Only: No 00:17:04.008 Volatile Memory Backup: OK 00:17:04.008 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:04.008 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:04.008 Available Spare: 0% 00:17:04.008 Available Sp[2024-10-01 08:31:55.766428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:04.008 [2024-10-01 08:31:55.766440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766467] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:04.008 [2024-10-01 08:31:55.766477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.008 [2024-10-01 08:31:55.766538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:04.008 [2024-10-01 08:31:55.766548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:04.008 [2024-10-01 08:31:55.767543] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:17:04.008 [2024-10-01 08:31:55.767582] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:04.008 [2024-10-01 08:31:55.767589] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:04.008 [2024-10-01 08:31:55.768552] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:04.008 [2024-10-01 08:31:55.768563] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:04.008 [2024-10-01 08:31:55.768624] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:04.008 [2024-10-01 08:31:55.775002] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:04.008 are Threshold: 0% 00:17:04.008 Life Percentage Used: 0% 00:17:04.008 Data Units Read: 0 00:17:04.008 Data Units Written: 0 00:17:04.008 Host Read Commands: 0 00:17:04.008 Host Write Commands: 0 00:17:04.008 Controller Busy Time: 0 minutes 00:17:04.008 Power Cycles: 0 00:17:04.008 Power On Hours: 0 hours 00:17:04.009 Unsafe Shutdowns: 0 00:17:04.009 Unrecoverable Media Errors: 0 00:17:04.009 Lifetime Error Log Entries: 0 00:17:04.009 Warning Temperature Time: 0 minutes 00:17:04.009 Critical Temperature Time: 0 minutes 00:17:04.009 00:17:04.009 Number of Queues 00:17:04.009 ================ 00:17:04.009 Number of I/O Submission Queues: 127 00:17:04.009 Number of I/O Completion Queues: 127 00:17:04.009 00:17:04.009 Active Namespaces 00:17:04.009 ================= 00:17:04.009 Namespace ID:1 00:17:04.009 Error Recovery Timeout: Unlimited 00:17:04.009 Command Set Identifier: NVM (00h) 00:17:04.009 Deallocate: Supported 00:17:04.009 Deallocated/Unwritten Error: Not Supported 00:17:04.009 Deallocated Read Value: Unknown 00:17:04.009 Deallocate in Write Zeroes: Not Supported 00:17:04.009 Deallocated Guard Field: 0xFFFF 00:17:04.009 Flush: Supported 00:17:04.009 Reservation: Supported 00:17:04.009 Namespace Sharing Capabilities: Multiple Controllers 00:17:04.009 Size (in LBAs): 131072 (0GiB) 00:17:04.009 Capacity (in LBAs): 131072 (0GiB) 00:17:04.009 Utilization (in LBAs): 131072 (0GiB) 00:17:04.009 NGUID: 30BBFD471C2E46C5A9BF38442E241DD8 00:17:04.009 UUID: 30bbfd47-1c2e-46c5-a9bf-38442e241dd8 00:17:04.009 Thin Provisioning: Not Supported 00:17:04.009 Per-NS Atomic Units: Yes 00:17:04.009 Atomic Boundary Size (Normal): 0 00:17:04.009 Atomic Boundary Size (PFail): 0 00:17:04.009 Atomic Boundary Offset: 0 00:17:04.009 Maximum Single Source Range Length: 65535 00:17:04.009 Maximum Copy Length: 65535 00:17:04.009 Maximum Source Range Count: 1 00:17:04.009 NGUID/EUI64 Never Reused: No 00:17:04.009 Namespace Write Protected: No 00:17:04.009 Number of LBA Formats: 1 00:17:04.009 Current LBA Format: LBA Format #00 00:17:04.009 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:04.009 00:17:04.009 08:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:04.269 [2024-10-01 08:31:55.960607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.553 Initializing NVMe Controllers 00:17:09.553 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:09.553 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:09.553 Initialization complete. Launching workers. 00:17:09.553 ======================================================== 00:17:09.553 Latency(us) 00:17:09.553 Device Information : IOPS MiB/s Average min max 00:17:09.553 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39947.82 156.05 3203.86 853.42 8753.40 00:17:09.553 ======================================================== 00:17:09.553 Total : 39947.82 156.05 3203.86 853.42 8753.40 00:17:09.553 00:17:09.553 [2024-10-01 08:32:00.978146] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.553 08:32:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:09.553 [2024-10-01 08:32:01.149999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:14.839 Initializing NVMe Controllers 00:17:14.839 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:14.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:14.839 Initialization complete. Launching workers. 00:17:14.839 ======================================================== 00:17:14.839 Latency(us) 00:17:14.839 Device Information : IOPS MiB/s Average min max 00:17:14.839 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16052.71 62.71 7979.28 6989.89 8972.60 00:17:14.839 ======================================================== 00:17:14.839 Total : 16052.71 62.71 7979.28 6989.89 8972.60 00:17:14.839 00:17:14.839 [2024-10-01 08:32:06.192793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:14.839 08:32:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:14.839 [2024-10-01 08:32:06.381630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:20.127 [2024-10-01 08:32:11.452194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:20.127 Initializing NVMe Controllers 00:17:20.127 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:20.127 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:20.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:20.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:20.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:20.128 Initialization complete. Launching workers. 
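An aside on the two perf tables above, before the reconnect output continues below: spdk_nvme_perf runs at a fixed queue depth (-q 128 here), so Little's law ties the reported average latency to the reported IOPS, and the averages can be re-derived by hand from the tables alone. A back-of-envelope check (not harness output; plain awk arithmetic):

  # Little's law at queue depth 128: avg latency ~ queue_depth / IOPS
  awk 'BEGIN { printf "read:  %.2f us\n", 128 / 39947.82 * 1000000 }'   # ~3204.2 us vs. 3203.86 us reported
  awk 'BEGIN { printf "write: %.2f us\n", 128 / 16052.71 * 1000000 }'   # ~7973.7 us vs. 7979.28 us reported

The residuals are well under one percent; the point is only that queue depth, IOPS, and average latency in each table are mutually consistent.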
00:17:20.128 Starting thread on core 2 00:17:20.128 Starting thread on core 3 00:17:20.128 Starting thread on core 1 00:17:20.128 08:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:20.128 [2024-10-01 08:32:11.719401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:23.428 [2024-10-01 08:32:14.778597] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:23.428 Initializing NVMe Controllers 00:17:23.428 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.428 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.428 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:23.428 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:23.428 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:23.428 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:23.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:23.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:23.428 Initialization complete. Launching workers. 00:17:23.428 Starting thread on core 1 with urgent priority queue 00:17:23.428 Starting thread on core 2 with urgent priority queue 00:17:23.428 Starting thread on core 3 with urgent priority queue 00:17:23.428 Starting thread on core 0 with urgent priority queue 00:17:23.428 SPDK bdev Controller (SPDK1 ) core 0: 8037.67 IO/s 12.44 secs/100000 ios 00:17:23.428 SPDK bdev Controller (SPDK1 ) core 1: 13648.00 IO/s 7.33 secs/100000 ios 00:17:23.428 SPDK bdev Controller (SPDK1 ) core 2: 7083.67 IO/s 14.12 secs/100000 ios 00:17:23.428 SPDK bdev Controller (SPDK1 ) core 3: 12632.67 IO/s 7.92 secs/100000 ios 00:17:23.428 ======================================================== 00:17:23.428 00:17:23.428 08:32:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:23.428 [2024-10-01 08:32:15.040425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:23.428 Initializing NVMe Controllers 00:17:23.428 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.428 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.428 Namespace ID: 1 size: 0GB 00:17:23.428 Initialization complete. 00:17:23.428 INFO: using host memory buffer for IO 00:17:23.428 Hello world! 
00:17:23.428 [2024-10-01 08:32:15.077641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:23.428 08:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:23.688 [2024-10-01 08:32:15.336052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:24.634 Initializing NVMe Controllers 00:17:24.634 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:24.634 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:24.634 Initialization complete. Launching workers. 00:17:24.634 submit (in ns) avg, min, max = 8510.7, 3923.3, 4001784.2 00:17:24.634 complete (in ns) avg, min, max = 16654.4, 2390.0, 4001034.2 00:17:24.634 00:17:24.634 Submit histogram 00:17:24.634 ================ 00:17:24.634 Range in us Cumulative Count 00:17:24.634 3.920 - 3.947: 1.9724% ( 373) 00:17:24.634 3.947 - 3.973: 9.9677% ( 1512) 00:17:24.634 3.973 - 4.000: 21.0195% ( 2090) 00:17:24.634 4.000 - 4.027: 32.7799% ( 2224) 00:17:24.634 4.027 - 4.053: 44.2600% ( 2171) 00:17:24.634 4.053 - 4.080: 56.3640% ( 2289) 00:17:24.634 4.080 - 4.107: 73.1426% ( 3173) 00:17:24.634 4.107 - 4.133: 87.7109% ( 2755) 00:17:24.634 4.133 - 4.160: 95.5423% ( 1481) 00:17:24.634 4.160 - 4.187: 98.4295% ( 546) 00:17:24.634 4.187 - 4.213: 99.2703% ( 159) 00:17:24.634 4.213 - 4.240: 99.4289% ( 30) 00:17:24.634 4.240 - 4.267: 99.5029% ( 14) 00:17:24.634 4.267 - 4.293: 99.5082% ( 1) 00:17:24.634 4.293 - 4.320: 99.5188% ( 2) 00:17:24.634 4.373 - 4.400: 99.5241% ( 1) 00:17:24.634 4.480 - 4.507: 99.5294% ( 1) 00:17:24.634 4.613 - 4.640: 99.5347% ( 1) 00:17:24.634 4.693 - 4.720: 99.5400% ( 1) 00:17:24.634 4.853 - 4.880: 99.5452% ( 1) 00:17:24.634 5.067 - 5.093: 99.5505% ( 1) 00:17:24.634 5.120 - 5.147: 99.5558% ( 1) 00:17:24.634 5.200 - 5.227: 99.5611% ( 1) 00:17:24.634 5.440 - 5.467: 99.5664% ( 1) 00:17:24.634 5.520 - 5.547: 99.5717% ( 1) 00:17:24.634 5.680 - 5.707: 99.5770% ( 1) 00:17:24.634 5.707 - 5.733: 99.5875% ( 2) 00:17:24.634 5.760 - 5.787: 99.5981% ( 2) 00:17:24.634 5.787 - 5.813: 99.6034% ( 1) 00:17:24.634 5.813 - 5.840: 99.6087% ( 1) 00:17:24.634 6.107 - 6.133: 99.6140% ( 1) 00:17:24.634 6.160 - 6.187: 99.6193% ( 1) 00:17:24.634 6.187 - 6.213: 99.6298% ( 2) 00:17:24.634 6.880 - 6.933: 99.6351% ( 1) 00:17:24.634 6.987 - 7.040: 99.6404% ( 1) 00:17:24.634 7.147 - 7.200: 99.6457% ( 1) 00:17:24.634 7.467 - 7.520: 99.6510% ( 1) 00:17:24.634 7.573 - 7.627: 99.6563% ( 1) 00:17:24.634 7.680 - 7.733: 99.6669% ( 2) 00:17:24.634 7.733 - 7.787: 99.6721% ( 1) 00:17:24.634 7.787 - 7.840: 99.6827% ( 2) 00:17:24.634 7.840 - 7.893: 99.6880% ( 1) 00:17:24.634 7.893 - 7.947: 99.6986% ( 2) 00:17:24.634 8.000 - 8.053: 99.7145% ( 3) 00:17:24.634 8.053 - 8.107: 99.7303% ( 3) 00:17:24.634 8.107 - 8.160: 99.7462% ( 3) 00:17:24.634 8.160 - 8.213: 99.7620% ( 3) 00:17:24.634 8.213 - 8.267: 99.7832% ( 4) 00:17:24.634 8.267 - 8.320: 99.7991% ( 3) 00:17:24.634 8.320 - 8.373: 99.8096% ( 2) 00:17:24.634 8.480 - 8.533: 99.8149% ( 1) 00:17:24.634 8.587 - 8.640: 99.8202% ( 1) 00:17:24.634 8.747 - 8.800: 99.8308% ( 2) 00:17:24.634 8.800 - 8.853: 99.8361% ( 1) 00:17:24.634 8.907 - 8.960: 99.8414% ( 1) 00:17:24.634 9.120 - 9.173: 99.8572% ( 3) 00:17:24.634 9.547 - 9.600: 99.8625% ( 1) 00:17:24.634 9.653 - 9.707: 99.8731% ( 2) 
00:17:24.635 10.347 - 10.400: 99.8784% ( 1)
00:17:24.635 13.493 - 13.547: 99.8837% ( 1)
00:17:24.635 15.147 - 15.253: 99.8890% ( 1)
00:17:24.635 3986.773 - 4014.080: 100.0000% ( 21)
00:17:24.635
00:17:24.635 Complete histogram
00:17:24.635 ==================
00:17:24.635 Range in us Cumulative Count
00:17:24.635 2.387 - 2.400: 0.0159% ( 3)
00:17:24.635 2.400 - 2.413: 0.8408% ( 156)
00:17:24.635 2.413 - 2.427: 1.1210% ( 53)
00:17:24.635 2.427 - 2.440: 1.2268% ( 20)
00:17:24.635 2.440 - 2.453: 1.6181% ( 74)
00:17:24.635 2.453 - 2.467: 52.8528% ( 9689)
00:17:24.635 2.467 - 2.480: 61.9322% ( 1717)
00:17:24.635 2.480 - 2.493: 74.9881% ( 2469)
00:17:24.635 2.493 - 2.507: 80.3130% ( 1007)
00:17:24.635 2.507 - 2.520: 81.9312% ( 306)
00:17:24.635 2.520 - 2.533: 86.3148% ( 829)
00:17:24.635 2.533 - 2.547: 92.4806% ( 1166)
00:17:24.635 2.547 - 2.560: 95.9600% ( 658)
00:17:24.635 2.560 - 2.573: 97.9324% ( 373)
00:17:24.635 2.573 - 2.587: 98.9213% ( 187)
00:17:24.635 2.587 - 2.600: 99.2015% ( 53)
00:17:24.635 2.600 - 2.613: 99.2967% ( 18)
00:17:24.635 2.667 - 2.680: 99.3020% ( 1)
00:17:24.635 2.733 - 2.747: 99.3073% ( 1)
00:17:24.635 2.920 - 2.933: 99.3126% ( 1)
00:17:24.635 3.053 - 3.067: 99.3179% ( 1)
00:17:24.635 3.173 - 3.187: 99.3231% ( 1)
00:17:24.635 3.200 - 3.213: 99.3284% ( 1)
00:17:24.635 5.200 - 5.227: 99.3337% ( 1)
00:17:24.635 5.333 - 5.360: 99.3390% ( 1)
00:17:24.635 5.547 - 5.573: 99.3443% ( 1)
00:17:24.635 5.653 - 5.680: 99.3496% ( 1)
00:17:24.635 5.707 - 5.733: 99.3549% ( 1)
00:17:24.635 5.840 - 5.867: 99.3654% ( 2)
00:17:24.635 5.920 - 5.947: 99.3707% ( 1)
00:17:24.635 5.973 - 6.000: 99.3813% ( 2)
00:17:24.635 6.027 - 6.053: 99.4025% ( 4)
00:17:24.635 6.107 - 6.133: 99.4078% ( 1)
00:17:24.635 6.187 - 6.213: 99.4130% ( 1)
00:17:24.635 6.213 - 6.240: 99.4183% ( 1)
00:17:24.635 6.240 - 6.267: 99.4289% ( 2)
00:17:24.635 6.320 - 6.347: 99.4342% ( 1)
00:17:24.635 6.347 - 6.373: 99.4395% ( 1)
00:17:24.635 6.400 - 6.427: 99.4501% ( 2)
00:17:24.635 6.480 - 6.507: 99.4659% ( 3)
00:17:24.635 6.533 - 6.560: 99.4765% ( 2)
00:17:24.635 6.587 - 6.613: 99.4871% ( 2)
00:17:24.635 6.667 - 6.693: 99.4924% ( 1)
00:17:24.635 6.773 - 6.800: 99.4976% ( 1)
00:17:24.635 6.800 - 6.827: 99.5029% ( 1)
00:17:24.635 6.827 - 6.880: 99.5135% ( 2)
00:17:24.635 6.880 - 6.933: 99.5188% ( 1)
00:17:24.635 6.933 - 6.987: 99.5241% ( 1)
00:17:24.635 6.987 - 7.040: 99.5347% ( 2)
00:17:24.635 7.253 - 7.307: 99.5400% ( 1)
00:17:24.635 7.307 - 7.360: 99.5558% ( 3)
00:17:24.635 7.413 - 7.467: 99.5664% ( 2)
00:17:24.635 7.467 - 7.520: 99.5770% ( 2)
00:17:24.635 7.520 - 7.573: 99.5928% ( 3)
00:17:24.635 7.680 - 7.733: 99.5981% ( 1)
00:17:24.635 7.893 - 7.947: 99.6034% ( 1)
00:17:24.635 8.053 - 8.107: 99.6087% ( 1)
00:17:24.635 8.107 - 8.160: 99.6140% ( 1)
00:17:24.635 8.267 - 8.320: 99.6193% ( 1)
00:17:24.635 8.320 - 8.373: 99.6246% ( 1)
00:17:24.635 8.693 - 8.747: 99.6298% ( 1)
00:17:24.635 9.867 - 9.920: 99.6351% ( 1)
00:17:24.635 12.427 - 12.480: 99.6404% ( 1)
00:17:24.635 12.853 - 12.907: 99.6457% ( 1)
00:17:24.635 3986.773 - 4014.080: 100.0000% ( 67)
00:17:24.635 [2024-10-01 08:32:16.359628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:17:24.635
00:17:24.635 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:17:24.635 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local
traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:24.635 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:24.635 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:24.635 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:24.895 [ 00:17:24.895 { 00:17:24.895 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:24.895 "subtype": "Discovery", 00:17:24.895 "listen_addresses": [], 00:17:24.895 "allow_any_host": true, 00:17:24.895 "hosts": [] 00:17:24.895 }, 00:17:24.895 { 00:17:24.895 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:24.895 "subtype": "NVMe", 00:17:24.895 "listen_addresses": [ 00:17:24.895 { 00:17:24.895 "trtype": "VFIOUSER", 00:17:24.895 "adrfam": "IPv4", 00:17:24.895 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:24.895 "trsvcid": "0" 00:17:24.895 } 00:17:24.895 ], 00:17:24.895 "allow_any_host": true, 00:17:24.895 "hosts": [], 00:17:24.895 "serial_number": "SPDK1", 00:17:24.895 "model_number": "SPDK bdev Controller", 00:17:24.895 "max_namespaces": 32, 00:17:24.895 "min_cntlid": 1, 00:17:24.895 "max_cntlid": 65519, 00:17:24.895 "namespaces": [ 00:17:24.895 { 00:17:24.895 "nsid": 1, 00:17:24.895 "bdev_name": "Malloc1", 00:17:24.895 "name": "Malloc1", 00:17:24.895 "nguid": "30BBFD471C2E46C5A9BF38442E241DD8", 00:17:24.895 "uuid": "30bbfd47-1c2e-46c5-a9bf-38442e241dd8" 00:17:24.895 } 00:17:24.895 ] 00:17:24.895 }, 00:17:24.895 { 00:17:24.895 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:24.895 "subtype": "NVMe", 00:17:24.895 "listen_addresses": [ 00:17:24.895 { 00:17:24.895 "trtype": "VFIOUSER", 00:17:24.895 "adrfam": "IPv4", 00:17:24.895 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:24.895 "trsvcid": "0" 00:17:24.895 } 00:17:24.895 ], 00:17:24.895 "allow_any_host": true, 00:17:24.895 "hosts": [], 00:17:24.895 "serial_number": "SPDK2", 00:17:24.895 "model_number": "SPDK bdev Controller", 00:17:24.895 "max_namespaces": 32, 00:17:24.895 "min_cntlid": 1, 00:17:24.895 "max_cntlid": 65519, 00:17:24.895 "namespaces": [ 00:17:24.895 { 00:17:24.895 "nsid": 1, 00:17:24.895 "bdev_name": "Malloc2", 00:17:24.895 "name": "Malloc2", 00:17:24.895 "nguid": "4F764CAC02BF4E5998B2B81E12AC407B", 00:17:24.895 "uuid": "4f764cac-02bf-4e59-98b2-b81e12ac407b" 00:17:24.895 } 00:17:24.895 ] 00:17:24.895 } 00:17:24.895 ] 00:17:24.895 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:24.895 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:24.895 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3712216 00:17:24.895 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:24.895 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:24.895 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:24.895 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:24.895 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:24.896 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:24.896 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:25.157 [2024-10-01 08:32:16.759444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.157 Malloc3 00:17:25.157 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:25.157 [2024-10-01 08:32:16.952698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.157 08:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:25.417 Asynchronous Event Request test 00:17:25.418 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.418 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.418 Registering asynchronous event callbacks... 00:17:25.418 Starting namespace attribute notice tests for all controllers... 00:17:25.418 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:25.418 aer_cb - Changed Namespace 00:17:25.418 Cleaning up... 
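Condensing the traced RPCs above: the "Changed Namespace" notice is provoked entirely from the management plane while test/nvme/aer/aer blocks on the touch file. The sequence that fires the namespace-attribute AER is just three rpc.py calls (paths as in the harness):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 --name Malloc3                       # new 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2  # attach as nsid 2 -> namespace-attribute notice
  $RPC nvmf_get_subsystems                                            # the listing below now shows Malloc3 as nsid 2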
00:17:25.418 [ 00:17:25.418 { 00:17:25.418 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.418 "subtype": "Discovery", 00:17:25.418 "listen_addresses": [], 00:17:25.418 "allow_any_host": true, 00:17:25.418 "hosts": [] 00:17:25.418 }, 00:17:25.418 { 00:17:25.418 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:25.418 "subtype": "NVMe", 00:17:25.418 "listen_addresses": [ 00:17:25.418 { 00:17:25.418 "trtype": "VFIOUSER", 00:17:25.418 "adrfam": "IPv4", 00:17:25.418 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:25.418 "trsvcid": "0" 00:17:25.418 } 00:17:25.418 ], 00:17:25.418 "allow_any_host": true, 00:17:25.418 "hosts": [], 00:17:25.418 "serial_number": "SPDK1", 00:17:25.418 "model_number": "SPDK bdev Controller", 00:17:25.418 "max_namespaces": 32, 00:17:25.418 "min_cntlid": 1, 00:17:25.418 "max_cntlid": 65519, 00:17:25.418 "namespaces": [ 00:17:25.418 { 00:17:25.418 "nsid": 1, 00:17:25.418 "bdev_name": "Malloc1", 00:17:25.418 "name": "Malloc1", 00:17:25.418 "nguid": "30BBFD471C2E46C5A9BF38442E241DD8", 00:17:25.418 "uuid": "30bbfd47-1c2e-46c5-a9bf-38442e241dd8" 00:17:25.418 }, 00:17:25.418 { 00:17:25.418 "nsid": 2, 00:17:25.418 "bdev_name": "Malloc3", 00:17:25.418 "name": "Malloc3", 00:17:25.418 "nguid": "5695E9F41BB2484FA1B6C555935291CF", 00:17:25.418 "uuid": "5695e9f4-1bb2-484f-a1b6-c555935291cf" 00:17:25.418 } 00:17:25.418 ] 00:17:25.418 }, 00:17:25.418 { 00:17:25.418 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:25.418 "subtype": "NVMe", 00:17:25.418 "listen_addresses": [ 00:17:25.418 { 00:17:25.418 "trtype": "VFIOUSER", 00:17:25.418 "adrfam": "IPv4", 00:17:25.418 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:25.418 "trsvcid": "0" 00:17:25.418 } 00:17:25.418 ], 00:17:25.418 "allow_any_host": true, 00:17:25.418 "hosts": [], 00:17:25.418 "serial_number": "SPDK2", 00:17:25.418 "model_number": "SPDK bdev Controller", 00:17:25.418 "max_namespaces": 32, 00:17:25.419 "min_cntlid": 1, 00:17:25.419 "max_cntlid": 65519, 00:17:25.419 "namespaces": [ 00:17:25.419 { 00:17:25.419 "nsid": 1, 00:17:25.419 "bdev_name": "Malloc2", 00:17:25.419 "name": "Malloc2", 00:17:25.419 "nguid": "4F764CAC02BF4E5998B2B81E12AC407B", 00:17:25.419 "uuid": "4f764cac-02bf-4e59-98b2-b81e12ac407b" 00:17:25.419 } 00:17:25.419 ] 00:17:25.419 } 00:17:25.419 ] 00:17:25.419 08:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3712216 00:17:25.419 08:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:25.419 08:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:25.419 08:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:25.419 08:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:25.419 [2024-10-01 08:32:17.181859] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:17:25.419 [2024-10-01 08:32:17.181898] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712235 ] 00:17:25.419 [2024-10-01 08:32:17.213556] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:25.419 [2024-10-01 08:32:17.222216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:25.419 [2024-10-01 08:32:17.222241] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbb997f3000 00:17:25.419 [2024-10-01 08:32:17.223216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.419 [2024-10-01 08:32:17.224225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.419 [2024-10-01 08:32:17.225231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.419 [2024-10-01 08:32:17.226240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.419 [2024-10-01 08:32:17.227251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.419 [2024-10-01 08:32:17.228251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.419 [2024-10-01 08:32:17.229256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:25.419 [2024-10-01 08:32:17.230262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:25.420 [2024-10-01 08:32:17.231274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:25.420 [2024-10-01 08:32:17.231285] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbb997e8000 00:17:25.420 [2024-10-01 08:32:17.232610] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:25.683 [2024-10-01 08:32:17.248825] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:25.683 [2024-10-01 08:32:17.248850] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:25.683 [2024-10-01 08:32:17.253940] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:25.683 [2024-10-01 08:32:17.253990] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:25.683 [2024-10-01 08:32:17.254077] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:25.683 [2024-10-01 
08:32:17.254094] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:25.683 [2024-10-01 08:32:17.254100] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:25.683 [2024-10-01 08:32:17.254940] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:25.683 [2024-10-01 08:32:17.254949] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:25.683 [2024-10-01 08:32:17.254957] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:25.683 [2024-10-01 08:32:17.255944] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:25.683 [2024-10-01 08:32:17.255954] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:25.683 [2024-10-01 08:32:17.255961] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:25.683 [2024-10-01 08:32:17.256948] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:25.683 [2024-10-01 08:32:17.256958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:25.683 [2024-10-01 08:32:17.257952] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:25.683 [2024-10-01 08:32:17.257961] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:25.683 [2024-10-01 08:32:17.257966] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:25.683 [2024-10-01 08:32:17.257973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:25.683 [2024-10-01 08:32:17.258078] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:25.683 [2024-10-01 08:32:17.258083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:25.683 [2024-10-01 08:32:17.258089] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:25.683 [2024-10-01 08:32:17.258968] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:25.683 [2024-10-01 08:32:17.259968] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:25.683 [2024-10-01 08:32:17.260974] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:17:25.683 [2024-10-01 08:32:17.261974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:25.683 [2024-10-01 08:32:17.262020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:25.683 [2024-10-01 08:32:17.262984] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:25.683 [2024-10-01 08:32:17.262998] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:25.683 [2024-10-01 08:32:17.263004] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:25.683 [2024-10-01 08:32:17.263025] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:25.683 [2024-10-01 08:32:17.263033] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:25.683 [2024-10-01 08:32:17.263045] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.683 [2024-10-01 08:32:17.263050] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.683 [2024-10-01 08:32:17.263054] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.683 [2024-10-01 08:32:17.263066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.683 [2024-10-01 08:32:17.271004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:25.683 [2024-10-01 08:32:17.271016] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:25.683 [2024-10-01 08:32:17.271021] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:25.683 [2024-10-01 08:32:17.271025] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:25.684 [2024-10-01 08:32:17.271030] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:25.684 [2024-10-01 08:32:17.271035] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:25.684 [2024-10-01 08:32:17.271039] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:25.684 [2024-10-01 08:32:17.271044] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.271052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.271062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.279002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.279015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.684 [2024-10-01 08:32:17.279024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.684 [2024-10-01 08:32:17.279033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.684 [2024-10-01 08:32:17.279042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.684 [2024-10-01 08:32:17.279047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.279057] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.279066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.287002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.287019] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:25.684 [2024-10-01 08:32:17.287025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.287031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.287040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.287049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.295002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.295067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.295075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.295083] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:25.684 [2024-10-01 08:32:17.295088] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:25.684 [2024-10-01 08:32:17.295092] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:17:25.684 [2024-10-01 08:32:17.295098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.303001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.303012] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:25.684 [2024-10-01 08:32:17.303023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.303031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.303039] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.684 [2024-10-01 08:32:17.303043] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.684 [2024-10-01 08:32:17.303047] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.684 [2024-10-01 08:32:17.303053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.311000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.311015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.311023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.311031] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:25.684 [2024-10-01 08:32:17.311036] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.684 [2024-10-01 08:32:17.311041] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.684 [2024-10-01 08:32:17.311048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.318999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.319009] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.319016] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.319024] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.319030] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.319035] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.319040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.319045] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:25.684 [2024-10-01 08:32:17.319050] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:25.684 [2024-10-01 08:32:17.319055] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:25.684 [2024-10-01 08:32:17.319071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.327001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.327017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.335000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.335014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.343001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.343017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.351002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.351021] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:25.684 [2024-10-01 08:32:17.351027] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:25.684 [2024-10-01 08:32:17.351031] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:25.684 [2024-10-01 08:32:17.351034] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:25.684 [2024-10-01 08:32:17.351038] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:25.684 [2024-10-01 08:32:17.351044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:25.684 [2024-10-01 08:32:17.351055] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:25.684 [2024-10-01 08:32:17.351059] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:25.684 [2024-10-01 08:32:17.351063] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.684 [2024-10-01 08:32:17.351069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.351077] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:25.684 [2024-10-01 08:32:17.351081] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:25.684 [2024-10-01 08:32:17.351085] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.684 [2024-10-01 08:32:17.351091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.351099] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:25.684 [2024-10-01 08:32:17.351103] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:25.684 [2024-10-01 08:32:17.351107] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:25.684 [2024-10-01 08:32:17.351113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:25.684 [2024-10-01 08:32:17.359003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.359019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:25.684 [2024-10-01 08:32:17.359030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:25.685 [2024-10-01 08:32:17.359037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:25.685 ===================================================== 00:17:25.685 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:25.685 ===================================================== 00:17:25.685 Controller Capabilities/Features 00:17:25.685 ================================ 00:17:25.685 Vendor ID: 4e58 00:17:25.685 Subsystem Vendor ID: 4e58 00:17:25.685 Serial Number: SPDK2 00:17:25.685 Model Number: SPDK bdev Controller 00:17:25.685 Firmware Version: 25.01 00:17:25.685 Recommended Arb Burst: 6 00:17:25.685 IEEE OUI Identifier: 8d 6b 50 00:17:25.685 Multi-path I/O 00:17:25.685 May have multiple subsystem ports: Yes 00:17:25.685 May have multiple controllers: Yes 00:17:25.685 Associated with SR-IOV VF: No 00:17:25.685 Max Data Transfer Size: 131072 00:17:25.685 Max Number of Namespaces: 32 00:17:25.685 Max Number of I/O Queues: 127 00:17:25.685 NVMe Specification Version (VS): 1.3 00:17:25.685 NVMe Specification Version (Identify): 1.3 00:17:25.685 Maximum Queue Entries: 256 00:17:25.685 Contiguous Queues Required: Yes 00:17:25.685 Arbitration Mechanisms Supported 00:17:25.685 Weighted Round Robin: Not Supported 00:17:25.685 Vendor Specific: Not Supported 00:17:25.685 Reset Timeout: 15000 ms 00:17:25.685 Doorbell Stride: 4 bytes 00:17:25.685 NVM Subsystem Reset: Not Supported 00:17:25.685 Command 
Sets Supported 00:17:25.685 NVM Command Set: Supported 00:17:25.685 Boot Partition: Not Supported 00:17:25.685 Memory Page Size Minimum: 4096 bytes 00:17:25.685 Memory Page Size Maximum: 4096 bytes 00:17:25.685 Persistent Memory Region: Not Supported 00:17:25.685 Optional Asynchronous Events Supported 00:17:25.685 Namespace Attribute Notices: Supported 00:17:25.685 Firmware Activation Notices: Not Supported 00:17:25.685 ANA Change Notices: Not Supported 00:17:25.685 PLE Aggregate Log Change Notices: Not Supported 00:17:25.685 LBA Status Info Alert Notices: Not Supported 00:17:25.685 EGE Aggregate Log Change Notices: Not Supported 00:17:25.685 Normal NVM Subsystem Shutdown event: Not Supported 00:17:25.685 Zone Descriptor Change Notices: Not Supported 00:17:25.685 Discovery Log Change Notices: Not Supported 00:17:25.685 Controller Attributes 00:17:25.685 128-bit Host Identifier: Supported 00:17:25.685 Non-Operational Permissive Mode: Not Supported 00:17:25.685 NVM Sets: Not Supported 00:17:25.685 Read Recovery Levels: Not Supported 00:17:25.685 Endurance Groups: Not Supported 00:17:25.685 Predictable Latency Mode: Not Supported 00:17:25.685 Traffic Based Keep ALive: Not Supported 00:17:25.685 Namespace Granularity: Not Supported 00:17:25.685 SQ Associations: Not Supported 00:17:25.685 UUID List: Not Supported 00:17:25.685 Multi-Domain Subsystem: Not Supported 00:17:25.685 Fixed Capacity Management: Not Supported 00:17:25.685 Variable Capacity Management: Not Supported 00:17:25.685 Delete Endurance Group: Not Supported 00:17:25.685 Delete NVM Set: Not Supported 00:17:25.685 Extended LBA Formats Supported: Not Supported 00:17:25.685 Flexible Data Placement Supported: Not Supported 00:17:25.685 00:17:25.685 Controller Memory Buffer Support 00:17:25.685 ================================ 00:17:25.685 Supported: No 00:17:25.685 00:17:25.685 Persistent Memory Region Support 00:17:25.685 ================================ 00:17:25.685 Supported: No 00:17:25.685 00:17:25.685 Admin Command Set Attributes 00:17:25.685 ============================ 00:17:25.685 Security Send/Receive: Not Supported 00:17:25.685 Format NVM: Not Supported 00:17:25.685 Firmware Activate/Download: Not Supported 00:17:25.685 Namespace Management: Not Supported 00:17:25.685 Device Self-Test: Not Supported 00:17:25.685 Directives: Not Supported 00:17:25.685 NVMe-MI: Not Supported 00:17:25.685 Virtualization Management: Not Supported 00:17:25.685 Doorbell Buffer Config: Not Supported 00:17:25.685 Get LBA Status Capability: Not Supported 00:17:25.685 Command & Feature Lockdown Capability: Not Supported 00:17:25.685 Abort Command Limit: 4 00:17:25.685 Async Event Request Limit: 4 00:17:25.685 Number of Firmware Slots: N/A 00:17:25.685 Firmware Slot 1 Read-Only: N/A 00:17:25.685 Firmware Activation Without Reset: N/A 00:17:25.685 Multiple Update Detection Support: N/A 00:17:25.685 Firmware Update Granularity: No Information Provided 00:17:25.685 Per-Namespace SMART Log: No 00:17:25.685 Asymmetric Namespace Access Log Page: Not Supported 00:17:25.685 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:25.685 Command Effects Log Page: Supported 00:17:25.685 Get Log Page Extended Data: Supported 00:17:25.685 Telemetry Log Pages: Not Supported 00:17:25.685 Persistent Event Log Pages: Not Supported 00:17:25.685 Supported Log Pages Log Page: May Support 00:17:25.685 Commands Supported & Effects Log Page: Not Supported 00:17:25.685 Feature Identifiers & Effects Log Page:May Support 00:17:25.685 NVMe-MI Commands & Effects Log Page: May Support 
00:17:25.685 Data Area 4 for Telemetry Log: Not Supported 00:17:25.685 Error Log Page Entries Supported: 128 00:17:25.685 Keep Alive: Supported 00:17:25.685 Keep Alive Granularity: 10000 ms 00:17:25.685 00:17:25.685 NVM Command Set Attributes 00:17:25.685 ========================== 00:17:25.685 Submission Queue Entry Size 00:17:25.685 Max: 64 00:17:25.685 Min: 64 00:17:25.685 Completion Queue Entry Size 00:17:25.685 Max: 16 00:17:25.685 Min: 16 00:17:25.685 Number of Namespaces: 32 00:17:25.685 Compare Command: Supported 00:17:25.685 Write Uncorrectable Command: Not Supported 00:17:25.685 Dataset Management Command: Supported 00:17:25.685 Write Zeroes Command: Supported 00:17:25.685 Set Features Save Field: Not Supported 00:17:25.685 Reservations: Not Supported 00:17:25.685 Timestamp: Not Supported 00:17:25.685 Copy: Supported 00:17:25.685 Volatile Write Cache: Present 00:17:25.685 Atomic Write Unit (Normal): 1 00:17:25.685 Atomic Write Unit (PFail): 1 00:17:25.685 Atomic Compare & Write Unit: 1 00:17:25.685 Fused Compare & Write: Supported 00:17:25.685 Scatter-Gather List 00:17:25.685 SGL Command Set: Supported (Dword aligned) 00:17:25.685 SGL Keyed: Not Supported 00:17:25.685 SGL Bit Bucket Descriptor: Not Supported 00:17:25.685 SGL Metadata Pointer: Not Supported 00:17:25.685 Oversized SGL: Not Supported 00:17:25.685 SGL Metadata Address: Not Supported 00:17:25.685 SGL Offset: Not Supported 00:17:25.685 Transport SGL Data Block: Not Supported 00:17:25.685 Replay Protected Memory Block: Not Supported 00:17:25.685 00:17:25.685 Firmware Slot Information 00:17:25.685 ========================= 00:17:25.685 Active slot: 1 00:17:25.685 Slot 1 Firmware Revision: 25.01 00:17:25.685 00:17:25.685 00:17:25.685 Commands Supported and Effects 00:17:25.685 ============================== 00:17:25.685 Admin Commands 00:17:25.685 -------------- 00:17:25.685 Get Log Page (02h): Supported 00:17:25.685 Identify (06h): Supported 00:17:25.685 Abort (08h): Supported 00:17:25.685 Set Features (09h): Supported 00:17:25.685 Get Features (0Ah): Supported 00:17:25.685 Asynchronous Event Request (0Ch): Supported 00:17:25.685 Keep Alive (18h): Supported 00:17:25.685 I/O Commands 00:17:25.685 ------------ 00:17:25.685 Flush (00h): Supported LBA-Change 00:17:25.685 Write (01h): Supported LBA-Change 00:17:25.685 Read (02h): Supported 00:17:25.685 Compare (05h): Supported 00:17:25.685 Write Zeroes (08h): Supported LBA-Change 00:17:25.685 Dataset Management (09h): Supported LBA-Change 00:17:25.685 Copy (19h): Supported LBA-Change 00:17:25.685 00:17:25.685 Error Log 00:17:25.685 ========= 00:17:25.685 00:17:25.685 Arbitration 00:17:25.685 =========== 00:17:25.685 Arbitration Burst: 1 00:17:25.685 00:17:25.685 Power Management 00:17:25.685 ================ 00:17:25.685 Number of Power States: 1 00:17:25.685 Current Power State: Power State #0 00:17:25.685 Power State #0: 00:17:25.685 Max Power: 0.00 W 00:17:25.685 Non-Operational State: Operational 00:17:25.685 Entry Latency: Not Reported 00:17:25.685 Exit Latency: Not Reported 00:17:25.685 Relative Read Throughput: 0 00:17:25.685 Relative Read Latency: 0 00:17:25.685 Relative Write Throughput: 0 00:17:25.685 Relative Write Latency: 0 00:17:25.685 Idle Power: Not Reported 00:17:25.685 Active Power: Not Reported 00:17:25.685 Non-Operational Permissive Mode: Not Supported 00:17:25.685 00:17:25.685 Health Information 00:17:25.685 ================== 00:17:25.686 Critical Warnings: 00:17:25.686 Available Spare Space: OK 00:17:25.686 Temperature: OK 00:17:25.686 Device 
Reliability: OK
00:17:25.686 Read Only: No
00:17:25.686 Volatile Memory Backup: OK
00:17:25.686 Current Temperature: 0 Kelvin (-273 Celsius)
00:17:25.686 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:17:25.686 Available Spare: 0%
00:17:25.686 Available Spare Threshold: 0%
00:17:25.686 [2024-10-01 08:32:17.359133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:17:25.686 [2024-10-01 08:32:17.367000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:17:25.686 [2024-10-01 08:32:17.367031] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:17:25.686 [2024-10-01 08:32:17.367041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:25.686 [2024-10-01 08:32:17.367048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:25.686 [2024-10-01 08:32:17.367054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:25.686 [2024-10-01 08:32:17.367060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:25.686 [2024-10-01 08:32:17.367110] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:17:25.686 [2024-10-01 08:32:17.367121] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:17:25.686 [2024-10-01 08:32:17.368113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:25.686 [2024-10-01 08:32:17.368162] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:17:25.686 [2024-10-01 08:32:17.368169] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:17:25.686 [2024-10-01 08:32:17.369112] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:17:25.686 [2024-10-01 08:32:17.369125] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:17:25.686 [2024-10-01 08:32:17.369180] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:17:25.686 [2024-10-01 08:32:17.370560] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:17:25.686 Life Percentage Used: 0%
00:17:25.686 Data Units Read: 0
00:17:25.686 Data Units Written: 0
00:17:25.686 Host Read Commands: 0
00:17:25.686 Host Write Commands: 0
00:17:25.686 Controller Busy Time: 0 minutes
00:17:25.686 Power Cycles: 0
00:17:25.686 Power On Hours: 0 hours
00:17:25.686 Unsafe Shutdowns: 0
00:17:25.686 Unrecoverable Media Errors: 0
00:17:25.686 Lifetime Error Log Entries: 0
00:17:25.686 Warning Temperature Time: 0 minutes
00:17:25.686 Critical Temperature Time: 0 minutes
00:17:25.686
00:17:25.686 Number of Queues
00:17:25.686 ================
00:17:25.686 Number of
I/O Submission Queues: 127 00:17:25.686 Number of I/O Completion Queues: 127 00:17:25.686 00:17:25.686 Active Namespaces 00:17:25.686 ================= 00:17:25.686 Namespace ID:1 00:17:25.686 Error Recovery Timeout: Unlimited 00:17:25.686 Command Set Identifier: NVM (00h) 00:17:25.686 Deallocate: Supported 00:17:25.686 Deallocated/Unwritten Error: Not Supported 00:17:25.686 Deallocated Read Value: Unknown 00:17:25.686 Deallocate in Write Zeroes: Not Supported 00:17:25.686 Deallocated Guard Field: 0xFFFF 00:17:25.686 Flush: Supported 00:17:25.686 Reservation: Supported 00:17:25.686 Namespace Sharing Capabilities: Multiple Controllers 00:17:25.686 Size (in LBAs): 131072 (0GiB) 00:17:25.686 Capacity (in LBAs): 131072 (0GiB) 00:17:25.686 Utilization (in LBAs): 131072 (0GiB) 00:17:25.686 NGUID: 4F764CAC02BF4E5998B2B81E12AC407B 00:17:25.686 UUID: 4f764cac-02bf-4e59-98b2-b81e12ac407b 00:17:25.686 Thin Provisioning: Not Supported 00:17:25.686 Per-NS Atomic Units: Yes 00:17:25.686 Atomic Boundary Size (Normal): 0 00:17:25.686 Atomic Boundary Size (PFail): 0 00:17:25.686 Atomic Boundary Offset: 0 00:17:25.686 Maximum Single Source Range Length: 65535 00:17:25.686 Maximum Copy Length: 65535 00:17:25.686 Maximum Source Range Count: 1 00:17:25.686 NGUID/EUI64 Never Reused: No 00:17:25.686 Namespace Write Protected: No 00:17:25.686 Number of LBA Formats: 1 00:17:25.686 Current LBA Format: LBA Format #00 00:17:25.686 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:25.686 00:17:25.686 08:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:25.947 [2024-10-01 08:32:17.556388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.235 Initializing NVMe Controllers 00:17:31.235 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:31.235 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:31.235 Initialization complete. Launching workers. 
00:17:31.235 ======================================================== 00:17:31.235 Latency(us) 00:17:31.235 Device Information : IOPS MiB/s Average min max 00:17:31.235 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40010.60 156.29 3199.36 846.25 7780.36 00:17:31.235 ======================================================== 00:17:31.235 Total : 40010.60 156.29 3199.36 846.25 7780.36 00:17:31.235 00:17:31.235 [2024-10-01 08:32:22.681182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:31.235 08:32:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:31.235 [2024-10-01 08:32:22.862764] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:36.520 Initializing NVMe Controllers 00:17:36.520 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:36.520 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:36.520 Initialization complete. Launching workers. 00:17:36.520 ======================================================== 00:17:36.520 Latency(us) 00:17:36.520 Device Information : IOPS MiB/s Average min max 00:17:36.520 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35266.00 137.76 3629.16 1099.68 10659.57 00:17:36.520 ======================================================== 00:17:36.520 Total : 35266.00 137.76 3629.16 1099.68 10659.57 00:17:36.520 00:17:36.520 [2024-10-01 08:32:27.882970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:36.520 08:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:36.520 [2024-10-01 08:32:28.076376] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:41.808 [2024-10-01 08:32:33.216091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:41.808 Initializing NVMe Controllers 00:17:41.808 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:41.808 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:41.808 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:41.808 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:41.808 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:41.808 Initialization complete. Launching workers. 
00:17:41.808 Starting thread on core 2 00:17:41.808 Starting thread on core 3 00:17:41.808 Starting thread on core 1 00:17:41.808 08:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:41.808 [2024-10-01 08:32:33.470429] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:45.107 [2024-10-01 08:32:36.518852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:45.107 Initializing NVMe Controllers 00:17:45.107 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.107 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.107 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:45.107 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:45.107 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:45.107 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:45.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:45.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:45.107 Initialization complete. Launching workers. 00:17:45.107 Starting thread on core 1 with urgent priority queue 00:17:45.107 Starting thread on core 2 with urgent priority queue 00:17:45.107 Starting thread on core 3 with urgent priority queue 00:17:45.107 Starting thread on core 0 with urgent priority queue 00:17:45.107 SPDK bdev Controller (SPDK2 ) core 0: 8369.67 IO/s 11.95 secs/100000 ios 00:17:45.107 SPDK bdev Controller (SPDK2 ) core 1: 13680.67 IO/s 7.31 secs/100000 ios 00:17:45.107 SPDK bdev Controller (SPDK2 ) core 2: 8143.67 IO/s 12.28 secs/100000 ios 00:17:45.107 SPDK bdev Controller (SPDK2 ) core 3: 11212.33 IO/s 8.92 secs/100000 ios 00:17:45.107 ======================================================== 00:17:45.107 00:17:45.108 08:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:45.108 [2024-10-01 08:32:36.787435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:45.108 Initializing NVMe Controllers 00:17:45.108 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.108 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:45.108 Namespace ID: 1 size: 0GB 00:17:45.108 Initialization complete. 00:17:45.108 INFO: using host memory buffer for IO 00:17:45.108 Hello world! 
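A quick consistency check on the two spdk_nvme_perf summaries above: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size (-o 4096) and divided by 2^20. A minimal bash sketch reproducing the arithmetic; the IOPS values are copied from the read and write tables above, and the script itself is illustrative only:

io_size=4096                      # matches -o 4096 in both perf invocations
for iops in 40010.60 35266.00; do
    # awk handles the floating-point math; bash arithmetic is integer-only
    awk -v iops="$iops" -v sz="$io_size" \
        'BEGIN { printf "%s IOPS x %d B = %.2f MiB/s\n", iops, sz, iops * sz / 1048576 }'
done
# Prints 156.29 MiB/s for the read run and 137.76 MiB/s for the write run,
# matching the MiB/s column in the tables above.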
00:17:45.108 [2024-10-01 08:32:36.797489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:45.108 08:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:45.368 [2024-10-01 08:32:37.053279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:46.752 Initializing NVMe Controllers 00:17:46.752 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:46.752 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:46.752 Initialization complete. Launching workers. 00:17:46.752 submit (in ns) avg, min, max = 7447.7, 3941.7, 4000403.3 00:17:46.752 complete (in ns) avg, min, max = 18380.0, 2382.5, 4000155.8 00:17:46.752 00:17:46.752 Submit histogram 00:17:46.752 ================ 00:17:46.752 Range in us Cumulative Count 00:17:46.752 3.920 - 3.947: 0.0839% ( 16) 00:17:46.752 3.947 - 3.973: 3.2192% ( 598) 00:17:46.752 3.973 - 4.000: 11.6605% ( 1610) 00:17:46.752 4.000 - 4.027: 22.0731% ( 1986) 00:17:46.753 4.027 - 4.053: 33.3351% ( 2148) 00:17:46.753 4.053 - 4.080: 43.9574% ( 2026) 00:17:46.753 4.080 - 4.107: 56.6455% ( 2420) 00:17:46.753 4.107 - 4.133: 74.1467% ( 3338) 00:17:46.753 4.133 - 4.160: 88.4968% ( 2737) 00:17:46.753 4.160 - 4.187: 95.8423% ( 1401) 00:17:46.753 4.187 - 4.213: 98.6788% ( 541) 00:17:46.753 4.213 - 4.240: 99.3079% ( 120) 00:17:46.753 4.240 - 4.267: 99.4285% ( 23) 00:17:46.753 4.267 - 4.293: 99.4914% ( 12) 00:17:46.753 4.293 - 4.320: 99.4967% ( 1) 00:17:46.753 4.320 - 4.347: 99.5072% ( 2) 00:17:46.753 4.347 - 4.373: 99.5124% ( 1) 00:17:46.753 4.373 - 4.400: 99.5176% ( 1) 00:17:46.753 4.667 - 4.693: 99.5229% ( 1) 00:17:46.753 4.693 - 4.720: 99.5281% ( 1) 00:17:46.753 4.907 - 4.933: 99.5439% ( 3) 00:17:46.753 4.987 - 5.013: 99.5491% ( 1) 00:17:46.753 5.040 - 5.067: 99.5543% ( 1) 00:17:46.753 5.067 - 5.093: 99.5596% ( 1) 00:17:46.753 5.173 - 5.200: 99.5648% ( 1) 00:17:46.753 5.200 - 5.227: 99.5701% ( 1) 00:17:46.753 5.253 - 5.280: 99.5753% ( 1) 00:17:46.753 5.387 - 5.413: 99.5806% ( 1) 00:17:46.753 5.707 - 5.733: 99.5858% ( 1) 00:17:46.753 5.733 - 5.760: 99.5910% ( 1) 00:17:46.753 5.760 - 5.787: 99.5963% ( 1) 00:17:46.753 6.107 - 6.133: 99.6068% ( 2) 00:17:46.753 6.133 - 6.160: 99.6120% ( 1) 00:17:46.753 6.187 - 6.213: 99.6173% ( 1) 00:17:46.753 6.213 - 6.240: 99.6225% ( 1) 00:17:46.753 6.240 - 6.267: 99.6330% ( 2) 00:17:46.753 6.320 - 6.347: 99.6382% ( 1) 00:17:46.753 6.400 - 6.427: 99.6435% ( 1) 00:17:46.753 6.453 - 6.480: 99.6487% ( 1) 00:17:46.753 6.480 - 6.507: 99.6540% ( 1) 00:17:46.753 6.533 - 6.560: 99.6592% ( 1) 00:17:46.753 6.560 - 6.587: 99.6749% ( 3) 00:17:46.753 6.587 - 6.613: 99.6802% ( 1) 00:17:46.753 6.613 - 6.640: 99.6907% ( 2) 00:17:46.753 6.987 - 7.040: 99.7064% ( 3) 00:17:46.753 7.147 - 7.200: 99.7116% ( 1) 00:17:46.753 7.200 - 7.253: 99.7169% ( 1) 00:17:46.753 7.253 - 7.307: 99.7221% ( 1) 00:17:46.753 7.413 - 7.467: 99.7274% ( 1) 00:17:46.753 7.467 - 7.520: 99.7378% ( 2) 00:17:46.753 7.520 - 7.573: 99.7536% ( 3) 00:17:46.753 7.573 - 7.627: 99.7588% ( 1) 00:17:46.753 7.680 - 7.733: 99.7746% ( 3) 00:17:46.753 7.787 - 7.840: 99.7903% ( 3) 00:17:46.753 7.840 - 7.893: 99.7955% ( 1) 00:17:46.753 7.893 - 7.947: 99.8008% ( 1) 00:17:46.753 7.947 - 8.000: 99.8113% ( 2) 00:17:46.753 8.000 - 8.053: 99.8165% ( 1) 
00:17:46.753 8.053 - 8.107: 99.8217% ( 1) 00:17:46.753 8.107 - 8.160: 99.8427% ( 4) 00:17:46.753 8.213 - 8.267: 99.8637% ( 4) 00:17:46.753 8.267 - 8.320: 99.8742% ( 2) 00:17:46.753 8.320 - 8.373: 99.8847% ( 2) 00:17:46.753 8.640 - 8.693: 99.8951% ( 2) 00:17:46.753 8.693 - 8.747: 99.9004% ( 1) 00:17:46.753 8.960 - 9.013: 99.9056% ( 1) 00:17:46.753 9.013 - 9.067: 99.9109% ( 1) 00:17:46.753 11.040 - 11.093: 99.9161% ( 1) 00:17:46.753 3986.773 - 4014.080: 100.0000% ( 16) 00:17:46.753 00:17:46.753 Complete histogram 00:17:46.753 ================== 00:17:46.753 Range in us Cumulative Count 00:17:46.753 2.373 - 2.387: 0.0157% ( 3) 00:17:46.753 2.387 - 2.400: 0.8599% ( 161) 00:17:46.753 2.400 - 2.413: 1.0224% ( 31) 00:17:46.753 2.413 - 2.427: 1.2007% ( 34) 00:17:46.753 2.427 - 2.440: 18.3715% ( 3275) 00:17:46.753 2.440 - 2.453: 56.6036% ( 7292) 00:17:46.753 2.453 - 2.467: 65.0501% ( 1611) 00:17:46.753 2.467 - 2.480: 75.4889% ( 1991) 00:17:46.753 [2024-10-01 08:32:38.151735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:46.753 2.480 - 2.493: 79.9560% ( 852) 00:17:46.753 2.493 - 2.507: 82.0532% ( 400) 00:17:46.753 2.507 - 2.520: 87.2175% ( 985) 00:17:46.753 2.520 - 2.533: 93.1736% ( 1136) 00:17:46.753 2.533 - 2.547: 96.2879% ( 594) 00:17:46.753 2.547 - 2.560: 98.0129% ( 329) 00:17:46.753 2.560 - 2.573: 98.9776% ( 184) 00:17:46.753 2.573 - 2.587: 99.2450% ( 51) 00:17:46.753 2.587 - 2.600: 99.3132% ( 13) 00:17:46.753 2.600 - 2.613: 99.3184% ( 1) 00:17:46.753 2.627 - 2.640: 99.3237% ( 1) 00:17:46.753 2.933 - 2.947: 99.3289% ( 1) 00:17:46.753 3.120 - 3.133: 99.3341% ( 1) 00:17:46.753 3.187 - 3.200: 99.3446% ( 2) 00:17:46.753 3.253 - 3.267: 99.3499% ( 1) 00:17:46.753 4.427 - 4.453: 99.3551% ( 1) 00:17:46.753 4.453 - 4.480: 99.3604% ( 1) 00:17:46.753 4.480 - 4.507: 99.3656% ( 1) 00:17:46.753 4.560 - 4.587: 99.3761% ( 2) 00:17:46.753 4.693 - 4.720: 99.3813% ( 1) 00:17:46.753 4.720 - 4.747: 99.3918% ( 2) 00:17:46.753 4.773 - 4.800: 99.4023% ( 2) 00:17:46.753 4.827 - 4.853: 99.4075% ( 1) 00:17:46.753 4.880 - 4.907: 99.4128% ( 1) 00:17:46.753 5.013 - 5.040: 99.4233% ( 2) 00:17:46.753 5.467 - 5.493: 99.4285% ( 1) 00:17:46.753 5.573 - 5.600: 99.4338% ( 1) 00:17:46.753 5.920 - 5.947: 99.4390% ( 1) 00:17:46.753 5.947 - 5.973: 99.4442% ( 1) 00:17:46.753 5.973 - 6.000: 99.4495% ( 1) 00:17:46.753 6.053 - 6.080: 99.4547% ( 1) 00:17:46.753 6.107 - 6.133: 99.4600% ( 1) 00:17:46.753 6.133 - 6.160: 99.4705% ( 2) 00:17:46.753 6.160 - 6.187: 99.4757% ( 1) 00:17:46.753 6.267 - 6.293: 99.4862% ( 2) 00:17:46.753 6.293 - 6.320: 99.4914% ( 1) 00:17:46.753 6.347 - 6.373: 99.5019% ( 2) 00:17:46.753 6.427 - 6.453: 99.5072% ( 1) 00:17:46.753 6.480 - 6.507: 99.5124% ( 1) 00:17:46.753 6.507 - 6.533: 99.5176% ( 1) 00:17:46.753 6.587 - 6.613: 99.5229% ( 1) 00:17:46.753 6.613 - 6.640: 99.5281% ( 1) 00:17:46.753 6.640 - 6.667: 99.5334% ( 1) 00:17:46.753 6.720 - 6.747: 99.5386% ( 1) 00:17:46.753 6.933 - 6.987: 99.5439% ( 1) 00:17:46.753 6.987 - 7.040: 99.5491% ( 1) 00:17:46.753 7.040 - 7.093: 99.5543% ( 1) 00:17:46.753 7.093 - 7.147: 99.5596% ( 1) 00:17:46.753 7.200 - 7.253: 99.5648% ( 1) 00:17:46.753 7.253 - 7.307: 99.5701% ( 1) 00:17:46.753 7.520 - 7.573: 99.5753% ( 1) 00:17:46.753 8.160 - 8.213: 99.5806% ( 1) 00:17:46.753 9.600 - 9.653: 99.5858% ( 1) 00:17:46.753 9.707 - 9.760: 99.5910% ( 1) 00:17:46.753 11.360 - 11.413: 99.5963% ( 1) 00:17:46.753 12.373 - 12.427: 99.6015% ( 1) 00:17:46.753 3741.013 - 3768.320: 99.6068% ( 1) 00:17:46.753 3986.773 - 4014.080: 100.0000% ( 75)
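The overhead tool's histograms above are cumulative: each row is a bucket of the form "low - high: cumulative% ( count )" with bounds in microseconds, so the submit histogram's 3986.773 - 4014.080 tail bucket lines up with the reported submit max of 4000403.3 ns. A small awk sketch for locating the bucket where the cumulative percentage first reaches a target; perf_histogram.txt is a hypothetical capture of the histogram rows, one per line:

awk -v target=99 '
    / - .*%/ {
        gsub(/[%():]/, "")        # row becomes: low - high cum count
        if ($4 + 0 >= target + 0) { print "p" target " bucket ends at", $3, "us"; exit }
    }' perf_histogram.txt
# Against the submit histogram above this prints: p99 bucket ends at 4.240 us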
00:17:46.753 00:17:46.753 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:46.753 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:46.753 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:46.753 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:46.753 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:46.753 [ 00:17:46.753 { 00:17:46.753 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:46.753 "subtype": "Discovery", 00:17:46.753 "listen_addresses": [], 00:17:46.753 "allow_any_host": true, 00:17:46.753 "hosts": [] 00:17:46.753 }, 00:17:46.753 { 00:17:46.753 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:46.753 "subtype": "NVMe", 00:17:46.753 "listen_addresses": [ 00:17:46.753 { 00:17:46.753 "trtype": "VFIOUSER", 00:17:46.753 "adrfam": "IPv4", 00:17:46.753 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:46.753 "trsvcid": "0" 00:17:46.753 } 00:17:46.753 ], 00:17:46.753 "allow_any_host": true, 00:17:46.753 "hosts": [], 00:17:46.753 "serial_number": "SPDK1", 00:17:46.753 "model_number": "SPDK bdev Controller", 00:17:46.753 "max_namespaces": 32, 00:17:46.753 "min_cntlid": 1, 00:17:46.753 "max_cntlid": 65519, 00:17:46.753 "namespaces": [ 00:17:46.753 { 00:17:46.753 "nsid": 1, 00:17:46.753 "bdev_name": "Malloc1", 00:17:46.753 "name": "Malloc1", 00:17:46.753 "nguid": "30BBFD471C2E46C5A9BF38442E241DD8", 00:17:46.753 "uuid": "30bbfd47-1c2e-46c5-a9bf-38442e241dd8" 00:17:46.753 }, 00:17:46.753 { 00:17:46.753 "nsid": 2, 00:17:46.753 "bdev_name": "Malloc3", 00:17:46.753 "name": "Malloc3", 00:17:46.753 "nguid": "5695E9F41BB2484FA1B6C555935291CF", 00:17:46.753 "uuid": "5695e9f4-1bb2-484f-a1b6-c555935291cf" 00:17:46.753 } 00:17:46.753 ] 00:17:46.753 }, 00:17:46.753 { 00:17:46.753 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:46.753 "subtype": "NVMe", 00:17:46.753 "listen_addresses": [ 00:17:46.753 { 00:17:46.753 "trtype": "VFIOUSER", 00:17:46.753 "adrfam": "IPv4", 00:17:46.753 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:46.753 "trsvcid": "0" 00:17:46.753 } 00:17:46.753 ], 00:17:46.753 "allow_any_host": true, 00:17:46.753 "hosts": [], 00:17:46.753 "serial_number": "SPDK2", 00:17:46.753 "model_number": "SPDK bdev Controller", 00:17:46.753 "max_namespaces": 32, 00:17:46.754 "min_cntlid": 1, 00:17:46.754 "max_cntlid": 65519, 00:17:46.754 "namespaces": [ 00:17:46.754 { 00:17:46.754 "nsid": 1, 00:17:46.754 "bdev_name": "Malloc2", 00:17:46.754 "name": "Malloc2", 00:17:46.754 "nguid": "4F764CAC02BF4E5998B2B81E12AC407B", 00:17:46.754 "uuid": "4f764cac-02bf-4e59-98b2-b81e12ac407b" 00:17:46.754 } 00:17:46.754 ] 00:17:46.754 } 00:17:46.754 ] 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3716381 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:46.754 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:46.754 [2024-10-01 08:32:38.554572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:46.754 Malloc4 00:17:47.014 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:47.014 [2024-10-01 08:32:38.734745] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.014 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:47.014 Asynchronous Event Request test 00:17:47.014 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.014 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.014 Registering asynchronous event callbacks... 00:17:47.014 Starting namespace attribute notice tests for all controllers... 00:17:47.014 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:47.014 aer_cb - Changed Namespace 00:17:47.014 Cleaning up... 
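The AER exchange traced above works as follows: the aer test tool posts an Asynchronous Event Request against cnode2, the test hot-adds a second namespace, and the controller completes the AER with a namespace-attribute-changed notice (log page 4, aen_event_type 0x02), which is why the subsystem listing that follows now shows nsid 2. The hot-add itself is the three rpc.py calls already traced in this run; condensed below as a sketch, with RPC as shorthand for the full rpc.py path used in this workspace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 --name Malloc4                       # backing bdev for the new namespace
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2  # hot-add as NSID 2, triggers the AEN
$RPC nvmf_get_subsystems                                            # listing now includes Malloc4 as nsid 2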
00:17:47.275 [ 00:17:47.275 { 00:17:47.275 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:47.275 "subtype": "Discovery", 00:17:47.275 "listen_addresses": [], 00:17:47.275 "allow_any_host": true, 00:17:47.275 "hosts": [] 00:17:47.275 }, 00:17:47.275 { 00:17:47.275 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:47.275 "subtype": "NVMe", 00:17:47.275 "listen_addresses": [ 00:17:47.275 { 00:17:47.275 "trtype": "VFIOUSER", 00:17:47.275 "adrfam": "IPv4", 00:17:47.275 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:47.275 "trsvcid": "0" 00:17:47.275 } 00:17:47.275 ], 00:17:47.275 "allow_any_host": true, 00:17:47.275 "hosts": [], 00:17:47.275 "serial_number": "SPDK1", 00:17:47.275 "model_number": "SPDK bdev Controller", 00:17:47.275 "max_namespaces": 32, 00:17:47.275 "min_cntlid": 1, 00:17:47.275 "max_cntlid": 65519, 00:17:47.275 "namespaces": [ 00:17:47.275 { 00:17:47.275 "nsid": 1, 00:17:47.275 "bdev_name": "Malloc1", 00:17:47.275 "name": "Malloc1", 00:17:47.275 "nguid": "30BBFD471C2E46C5A9BF38442E241DD8", 00:17:47.275 "uuid": "30bbfd47-1c2e-46c5-a9bf-38442e241dd8" 00:17:47.275 }, 00:17:47.275 { 00:17:47.275 "nsid": 2, 00:17:47.275 "bdev_name": "Malloc3", 00:17:47.275 "name": "Malloc3", 00:17:47.275 "nguid": "5695E9F41BB2484FA1B6C555935291CF", 00:17:47.275 "uuid": "5695e9f4-1bb2-484f-a1b6-c555935291cf" 00:17:47.275 } 00:17:47.275 ] 00:17:47.275 }, 00:17:47.275 { 00:17:47.275 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:47.275 "subtype": "NVMe", 00:17:47.275 "listen_addresses": [ 00:17:47.275 { 00:17:47.275 "trtype": "VFIOUSER", 00:17:47.275 "adrfam": "IPv4", 00:17:47.275 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:47.275 "trsvcid": "0" 00:17:47.275 } 00:17:47.275 ], 00:17:47.275 "allow_any_host": true, 00:17:47.275 "hosts": [], 00:17:47.275 "serial_number": "SPDK2", 00:17:47.275 "model_number": "SPDK bdev Controller", 00:17:47.275 "max_namespaces": 32, 00:17:47.275 "min_cntlid": 1, 00:17:47.275 "max_cntlid": 65519, 00:17:47.275 "namespaces": [ 00:17:47.275 { 00:17:47.275 "nsid": 1, 00:17:47.275 "bdev_name": "Malloc2", 00:17:47.275 "name": "Malloc2", 00:17:47.275 "nguid": "4F764CAC02BF4E5998B2B81E12AC407B", 00:17:47.275 "uuid": "4f764cac-02bf-4e59-98b2-b81e12ac407b" 00:17:47.275 }, 00:17:47.275 { 00:17:47.275 "nsid": 2, 00:17:47.275 "bdev_name": "Malloc4", 00:17:47.275 "name": "Malloc4", 00:17:47.275 "nguid": "B9D711E0CB294E9A98FB6677741CC2AD", 00:17:47.275 "uuid": "b9d711e0-cb29-4e9a-98fb-6677741cc2ad" 00:17:47.275 } 00:17:47.275 ] 00:17:47.275 } 00:17:47.275 ] 00:17:47.275 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3716381 00:17:47.276 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:47.276 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3707470 00:17:47.276 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3707470 ']' 00:17:47.276 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3707470 00:17:47.276 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:47.276 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.276 08:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3707470 00:17:47.276 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.276 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.276 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3707470' 00:17:47.276 killing process with pid 3707470 00:17:47.276 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3707470 00:17:47.276 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3707470 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3716611 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3716611' 00:17:47.536 Process pid: 3716611 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3716611 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3716611 ']' 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.536 08:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:47.536 [2024-10-01 08:32:39.253859] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:47.536 [2024-10-01 08:32:39.254807] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:17:47.536 [2024-10-01 08:32:39.254850] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.536 [2024-10-01 08:32:39.316726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.797 [2024-10-01 08:32:39.379274] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.797 [2024-10-01 08:32:39.379314] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.797 [2024-10-01 08:32:39.379322] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.797 [2024-10-01 08:32:39.379329] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.797 [2024-10-01 08:32:39.379335] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.797 [2024-10-01 08:32:39.380854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.797 [2024-10-01 08:32:39.380969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.797 [2024-10-01 08:32:39.381124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.797 [2024-10-01 08:32:39.381124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.797 [2024-10-01 08:32:39.445395] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:47.797 [2024-10-01 08:32:39.445729] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:47.797 [2024-10-01 08:32:39.446730] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:47.797 [2024-10-01 08:32:39.446765] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:47.797 [2024-10-01 08:32:39.446963] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:48.367 08:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.367 08:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:48.367 08:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:49.309 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:49.570 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:49.570 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:49.570 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:49.570 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:49.570 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:49.829 Malloc1 00:17:49.829 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:49.829 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:50.090 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:50.350 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:50.350 08:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:50.350 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:50.350 Malloc2 00:17:50.610 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:50.610 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:50.871 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3716611 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 3716611 ']' 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3716611 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3716611 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3716611' 00:17:51.131 killing process with pid 3716611 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3716611 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3716611 00:17:51.131 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:51.392 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:51.392 00:17:51.393 real 0m51.085s 00:17:51.393 user 3m15.555s 00:17:51.393 sys 0m2.788s 00:17:51.393 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.393 08:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:51.393 ************************************ 00:17:51.393 END TEST nvmf_vfio_user 00:17:51.393 ************************************ 00:17:51.393 08:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:51.393 08:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:51.393 08:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.393 08:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:51.393 ************************************ 00:17:51.393 START TEST nvmf_vfio_user_nvme_compliance 00:17:51.393 ************************************ 00:17:51.393 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:51.393 * Looking for test storage... 
00:17:51.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:51.393 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:51.393 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.393 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:51.393 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:51.393 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.393 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.393 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:51.668 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.669 --rc genhtml_branch_coverage=1 00:17:51.669 --rc genhtml_function_coverage=1 00:17:51.669 --rc genhtml_legend=1 00:17:51.669 --rc geninfo_all_blocks=1 00:17:51.669 --rc geninfo_unexecuted_blocks=1 00:17:51.669 00:17:51.669 ' 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.669 --rc genhtml_branch_coverage=1 00:17:51.669 --rc genhtml_function_coverage=1 00:17:51.669 --rc genhtml_legend=1 00:17:51.669 --rc geninfo_all_blocks=1 00:17:51.669 --rc geninfo_unexecuted_blocks=1 00:17:51.669 00:17:51.669 ' 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.669 --rc genhtml_branch_coverage=1 00:17:51.669 --rc genhtml_function_coverage=1 00:17:51.669 --rc genhtml_legend=1 00:17:51.669 --rc geninfo_all_blocks=1 00:17:51.669 --rc geninfo_unexecuted_blocks=1 00:17:51.669 00:17:51.669 ' 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.669 --rc genhtml_branch_coverage=1 00:17:51.669 --rc genhtml_function_coverage=1 00:17:51.669 --rc genhtml_legend=1 00:17:51.669 --rc geninfo_all_blocks=1 00:17:51.669 --rc 
geninfo_unexecuted_blocks=1 00:17:51.669 00:17:51.669 ' 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.669 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3717370 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3717370' 00:17:51.670 Process pid: 3717370 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3717370 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3717370 ']' 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.670 08:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:51.670 [2024-10-01 08:32:43.327217] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:17:51.670 [2024-10-01 08:32:43.327294] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.670 [2024-10-01 08:32:43.392392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:51.670 [2024-10-01 08:32:43.467459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.670 [2024-10-01 08:32:43.467499] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.670 [2024-10-01 08:32:43.467507] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.670 [2024-10-01 08:32:43.467514] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.670 [2024-10-01 08:32:43.467520] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.670 [2024-10-01 08:32:43.468520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.670 [2024-10-01 08:32:43.468638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.670 [2024-10-01 08:32:43.468641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.611 08:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.611 08:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:52.611 08:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.553 malloc0 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:53.553 08:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.553 08:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:53.553 00:17:53.553 00:17:53.553 CUnit - A unit testing framework for C - Version 2.1-3 00:17:53.553 http://cunit.sourceforge.net/ 00:17:53.553 00:17:53.553 00:17:53.553 Suite: nvme_compliance 00:17:53.553 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-01 08:32:45.371226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:53.553 [2024-10-01 08:32:45.372573] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:53.553 [2024-10-01 08:32:45.372585] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:53.553 [2024-10-01 08:32:45.372590] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:53.553 [2024-10-01 08:32:45.374247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:53.813 passed 00:17:53.813 Test: admin_identify_ctrlr_verify_fused ...[2024-10-01 08:32:45.468874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:53.813 [2024-10-01 08:32:45.471888] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:53.813 passed 00:17:53.813 Test: admin_identify_ns ...[2024-10-01 08:32:45.568252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:53.813 [2024-10-01 08:32:45.628009] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:53.813 [2024-10-01 08:32:45.636003] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:54.073 [2024-10-01 08:32:45.657119] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:17:54.073 passed 00:17:54.073 Test: admin_get_features_mandatory_features ...[2024-10-01 08:32:45.750778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.073 [2024-10-01 08:32:45.753799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.073 passed 00:17:54.073 Test: admin_get_features_optional_features ...[2024-10-01 08:32:45.848350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.073 [2024-10-01 08:32:45.851376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.073 passed 00:17:54.333 Test: admin_set_features_number_of_queues ...[2024-10-01 08:32:45.944488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.333 [2024-10-01 08:32:46.049098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.333 passed 00:17:54.333 Test: admin_get_log_page_mandatory_logs ...[2024-10-01 08:32:46.142109] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.333 [2024-10-01 08:32:46.146134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.593 passed 00:17:54.593 Test: admin_get_log_page_with_lpo ...[2024-10-01 08:32:46.238243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.593 [2024-10-01 08:32:46.306005] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:54.593 [2024-10-01 08:32:46.319049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.593 passed 00:17:54.593 Test: fabric_property_get ...[2024-10-01 08:32:46.413123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.593 [2024-10-01 08:32:46.414366] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:54.593 [2024-10-01 08:32:46.416137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.854 passed 00:17:54.854 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-01 08:32:46.510706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.854 [2024-10-01 08:32:46.511969] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:54.854 [2024-10-01 08:32:46.513723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.854 passed 00:17:54.854 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-01 08:32:46.606884] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.114 [2024-10-01 08:32:46.691004] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.114 [2024-10-01 08:32:46.707002] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.114 [2024-10-01 08:32:46.712080] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.114 passed 00:17:55.114 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-01 08:32:46.803683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.114 [2024-10-01 08:32:46.804934] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:55.114 [2024-10-01 08:32:46.806701] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.114 passed 00:17:55.114 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-01 08:32:46.899251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.374 [2024-10-01 08:32:46.974083] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:55.374 [2024-10-01 08:32:46.995005] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:55.374 [2024-10-01 08:32:47.000088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.374 passed 00:17:55.374 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-01 08:32:47.089712] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.374 [2024-10-01 08:32:47.090951] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:55.374 [2024-10-01 08:32:47.090972] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:55.374 [2024-10-01 08:32:47.092726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.374 passed 00:17:55.374 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-01 08:32:47.185853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.634 [2024-10-01 08:32:47.278002] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:55.634 [2024-10-01 08:32:47.286001] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:55.634 [2024-10-01 08:32:47.294000] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:55.634 [2024-10-01 08:32:47.302003] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:55.634 [2024-10-01 08:32:47.331090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.634 passed 00:17:55.634 Test: admin_create_io_sq_verify_pc ...[2024-10-01 08:32:47.422699] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.634 [2024-10-01 08:32:47.439011] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:55.634 [2024-10-01 08:32:47.456856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.894 passed 00:17:55.894 Test: admin_create_io_qp_max_qps ...[2024-10-01 08:32:47.551382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.834 [2024-10-01 08:32:48.655007] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:57.410 [2024-10-01 08:32:49.030776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.410 passed 00:17:57.410 Test: admin_create_io_sq_shared_cq ...[2024-10-01 08:32:49.122950] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.671 [2024-10-01 08:32:49.256007] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:57.671 [2024-10-01 08:32:49.293046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:57.671 passed 00:17:57.671 00:17:57.671 Run Summary: Type Total Ran Passed Failed Inactive 00:17:57.671 suites 1 1 n/a 0 0 00:17:57.671 tests 18 18 18 0 0 00:17:57.671 asserts 360 
360 360 0 n/a 00:17:57.671 00:17:57.671 Elapsed time = 1.641 seconds 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3717370 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3717370 ']' 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3717370 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3717370 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3717370' 00:17:57.671 killing process with pid 3717370 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3717370 00:17:57.671 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3717370 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:57.938 00:17:57.938 real 0m6.523s 00:17:57.938 user 0m18.405s 00:17:57.938 sys 0m0.540s 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:57.938 ************************************ 00:17:57.938 END TEST nvmf_vfio_user_nvme_compliance 00:17:57.938 ************************************ 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.938 ************************************ 00:17:57.938 START TEST nvmf_vfio_user_fuzz 00:17:57.938 ************************************ 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:57.938 * Looking for test storage... 
00:17:57.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:17:57.938 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.233 --rc genhtml_branch_coverage=1 00:17:58.233 --rc genhtml_function_coverage=1 00:17:58.233 --rc genhtml_legend=1 00:17:58.233 --rc geninfo_all_blocks=1 00:17:58.233 --rc geninfo_unexecuted_blocks=1 00:17:58.233 00:17:58.233 ' 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.233 --rc genhtml_branch_coverage=1 00:17:58.233 --rc genhtml_function_coverage=1 00:17:58.233 --rc genhtml_legend=1 00:17:58.233 --rc geninfo_all_blocks=1 00:17:58.233 --rc geninfo_unexecuted_blocks=1 00:17:58.233 00:17:58.233 ' 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.233 --rc genhtml_branch_coverage=1 00:17:58.233 --rc genhtml_function_coverage=1 00:17:58.233 --rc genhtml_legend=1 00:17:58.233 --rc geninfo_all_blocks=1 00:17:58.233 --rc geninfo_unexecuted_blocks=1 00:17:58.233 00:17:58.233 ' 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:58.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.233 --rc genhtml_branch_coverage=1 00:17:58.233 --rc genhtml_function_coverage=1 00:17:58.233 --rc genhtml_legend=1 00:17:58.233 --rc geninfo_all_blocks=1 00:17:58.233 --rc geninfo_unexecuted_blocks=1 00:17:58.233 00:17:58.233 ' 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:58.233 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:58.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3718774 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3718774' 00:17:58.234 Process pid: 3718774 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3718774 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3718774 ']' 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
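The trace above is the fuzz harness bringing up its own target process: vfio_user_fuzz.sh launches nvmf_tgt with -i 0 (shared-memory instance ID), -e 0xFFFF (enable every tracepoint group) and -m 0x1 (pin the app to core 0), records the pid as nvmfpid, and waitforlisten then blocks until the RPC server answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of the same bring-up follows; the polling loop is a simplified stand-in for waitforlisten, and $SPDK_DIR is a hypothetical path to a built SPDK tree:

    # Start the NVMe-oF target on core 0 with all tracepoint groups enabled.
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Wait for the RPC Unix socket to appear before issuing any rpc.py calls
    # (simplified stand-in for the waitforlisten helper seen in the trace).
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done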
00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.234 08:32:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.236 08:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.236 08:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:59.236 08:32:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.178 malloc0 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
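With the target listening, the script builds the same topology the compliance suite used: a VFIOUSER transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 (serial "spdk", any host allowed via -a), the bdev attached as a namespace, and a vfio-user listener rooted at /var/run/vfio-user. rpc_cmd in the trace is roughly a wrapper over scripts/rpc.py, so an equivalent sketch of the sequence is:

    # Recreate the target state the fuzzer runs against (sketch; rpc.py is
    # SPDK's scripts/rpc.py talking to the target started above).
    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz invocation that follows then exercises this endpoint for 30 seconds (-t 30) from core 1 (-m 0x2) with a fixed seed (-S 123456), so any failure is reproducible, addressing the listener through the -F transport ID string shown in the log.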
00:18:00.178 08:32:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:32.311 Fuzzing completed. Shutting down the fuzz application 00:18:32.311 00:18:32.311 Dumping successful admin opcodes: 00:18:32.311 8, 9, 10, 24, 00:18:32.311 Dumping successful io opcodes: 00:18:32.311 0, 00:18:32.311 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1099966, total successful commands: 4330, random_seed: 2502346176 00:18:32.311 NS: 0x200003a1ef00 admin qp, Total commands completed: 138314, total successful commands: 1120, random_seed: 4258550208 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3718774 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3718774 ']' 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3718774 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3718774 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3718774' 00:18:32.311 killing process with pid 3718774 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3718774 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3718774 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:32.311 00:18:32.311 real 0m33.849s 00:18:32.311 user 0m38.137s 00:18:32.311 sys 0m25.890s 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.311 
************************************ 00:18:32.311 END TEST nvmf_vfio_user_fuzz 00:18:32.311 ************************************ 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:32.311 ************************************ 00:18:32.311 START TEST nvmf_auth_target 00:18:32.311 ************************************ 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:32.311 * Looking for test storage... 00:18:32.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.311 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.312 --rc genhtml_branch_coverage=1 00:18:32.312 --rc genhtml_function_coverage=1 00:18:32.312 --rc genhtml_legend=1 00:18:32.312 --rc geninfo_all_blocks=1 00:18:32.312 --rc geninfo_unexecuted_blocks=1 00:18:32.312 00:18:32.312 ' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.312 --rc genhtml_branch_coverage=1 00:18:32.312 --rc genhtml_function_coverage=1 00:18:32.312 --rc genhtml_legend=1 00:18:32.312 --rc geninfo_all_blocks=1 00:18:32.312 --rc geninfo_unexecuted_blocks=1 00:18:32.312 00:18:32.312 ' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.312 --rc genhtml_branch_coverage=1 00:18:32.312 --rc genhtml_function_coverage=1 00:18:32.312 --rc genhtml_legend=1 00:18:32.312 --rc geninfo_all_blocks=1 00:18:32.312 --rc geninfo_unexecuted_blocks=1 00:18:32.312 00:18:32.312 ' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:32.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.312 --rc genhtml_branch_coverage=1 00:18:32.312 --rc genhtml_function_coverage=1 00:18:32.312 --rc genhtml_legend=1 00:18:32.312 --rc geninfo_all_blocks=1 00:18:32.312 --rc geninfo_unexecuted_blocks=1 00:18:32.312 00:18:32.312 ' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.312 08:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:32.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:32.312 08:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.899 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:38.900 
08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:38.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:38.900 08:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:38.900 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:38.900 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:38.900 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.900 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:39.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:39.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:18:39.161 00:18:39.161 --- 10.0.0.2 ping statistics --- 00:18:39.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.161 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:18:39.161 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:18:39.161 00:18:39.161 --- 10.0.0.1 ping statistics --- 00:18:39.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.162 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3729643 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3729643 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3729643 ']' 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
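
nvmf_tcp_init, traced above, splits the two E810 ports between a target network namespace and the default (initiator) namespace before the target application is launched. Collected into a standalone sketch, using the interface names and addresses from this log (run as root); a hedged reconstruction of the traced commands, not the helper itself:

#!/usr/bin/env bash
# Target side: cvl_0_0 at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace.
# Initiator side: cvl_0_1 at 10.0.0.1 in the default namespace.
set -e
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP port through to the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Cross-namespace reachability checks, as in the trace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, the target (nvmf_tgt) runs under "ip netns exec cvl_0_0_ns_spdk" via NVMF_TARGET_NS_CMD, while the host-side spdk_tgt (-r /var/tmp/host.sock) and the nvme connect calls run in the default namespace; that split is what the rest of this trace relies on.
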
00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.162 08:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3729681 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=515192197ecaefb76228bc3d7e6e72a64c03fcd821325057 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Y9c 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 515192197ecaefb76228bc3d7e6e72a64c03fcd821325057 0 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 515192197ecaefb76228bc3d7e6e72a64c03fcd821325057 0 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=515192197ecaefb76228bc3d7e6e72a64c03fcd821325057 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 
00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Y9c 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Y9c 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Y9c 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=54d4044fd6e9c21a5aaec9361d7525ceb525962e8f4115c8620695ce4eb92543 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.y8c 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 54d4044fd6e9c21a5aaec9361d7525ceb525962e8f4115c8620695ce4eb92543 3 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 54d4044fd6e9c21a5aaec9361d7525ceb525962e8f4115c8620695ce4eb92543 3 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=54d4044fd6e9c21a5aaec9361d7525ceb525962e8f4115c8620695ce4eb92543 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:40.101 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.y8c 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.y8c 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.y8c 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 
00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=6a4362dad9811cac47056d62ccd42923 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.s5g 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 6a4362dad9811cac47056d62ccd42923 1 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 6a4362dad9811cac47056d62ccd42923 1 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=6a4362dad9811cac47056d62ccd42923 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:40.362 08:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:40.362 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.s5g 00:18:40.362 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.s5g 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.s5g 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5122da2f0492fce321f999c9fcc86e3a4e8f3a463d2f128d 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.tJI 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 5122da2f0492fce321f999c9fcc86e3a4e8f3a463d2f128d 2 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5122da2f0492fce321f999c9fcc86e3a4e8f3a463d2f128d 2 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:40.363 08:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5122da2f0492fce321f999c9fcc86e3a4e8f3a463d2f128d 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.tJI 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.tJI 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.tJI 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a1db04cc8a392a44497baffd67cb3227480536bfefd8ecd4 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.nAr 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a1db04cc8a392a44497baffd67cb3227480536bfefd8ecd4 2 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a1db04cc8a392a44497baffd67cb3227480536bfefd8ecd4 2 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a1db04cc8a392a44497baffd67cb3227480536bfefd8ecd4 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.nAr 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.nAr 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.nAr 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 
00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b57549941755ef89217de5ef922777d9 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.XLU 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key b57549941755ef89217de5ef922777d9 1 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b57549941755ef89217de5ef922777d9 1 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b57549941755ef89217de5ef922777d9 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:18:40.363 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.XLU 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.XLU 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.XLU 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=4aeb755d410c3fd81d56d19bfa6d6842c5ff1782b41d629b99ac0818e897cd6a 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.FQI 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # 
format_dhchap_key 4aeb755d410c3fd81d56d19bfa6d6842c5ff1782b41d629b99ac0818e897cd6a 3 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 4aeb755d410c3fd81d56d19bfa6d6842c5ff1782b41d629b99ac0818e897cd6a 3 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=4aeb755d410c3fd81d56d19bfa6d6842c5ff1782b41d629b99ac0818e897cd6a 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.FQI 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.FQI 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.FQI 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3729643 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3729643 ']' 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.624 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3729681 /var/tmp/host.sock 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3729681 ']' 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:40.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
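
Each secret above is produced by gen_dhchap_key: a random hex string read with xxd, wrapped into the DHHC-1 representation by the inline "python -" step (whose body the xtrace does not show), written to a mktemp file, and chmod'd to 0600. A sketch of the whole routine follows; the python body is reconstructed from the DHHC-1 secret format (base64 of the key characters followed by their little-endian CRC-32) and is an assumption, not a copy of the helper:

#!/usr/bin/env bash
# gen_dhchap_key <digest> <len>, reconstructed. Digest ids match the trace:
# 0 = null, 1 = sha256, 2 = sha384, 3 = sha512.
digest=0
len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: CRC-32 trailer per DHHC-1
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"
echo "$file"

The resulting files are what the rest of the trace registers on both sides with keyring_file_add_key (key0..key3 plus ckey0..ckey2; ckeys[3] is empty) and what nvme connect later consumes as the --dhchap-secret and --dhchap-ctrl-secret strings.
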
00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Y9c 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Y9c 00:18:40.885 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Y9c 00:18:41.145 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.y8c ]] 00:18:41.145 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.y8c 00:18:41.145 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.145 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.145 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.145 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.y8c 00:18:41.145 08:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.y8c 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.s5g 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.405 08:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.s5g 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.s5g 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.tJI ]] 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tJI 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tJI 00:18:41.405 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tJI 00:18:41.666 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:41.666 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nAr 00:18:41.666 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.666 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.666 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.666 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nAr 00:18:41.666 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nAr 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.XLU ]] 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLU 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLU 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLU 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:41.927 08:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FQI 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.FQI 00:18:41.927 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.FQI 00:18:42.188 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:42.188 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:42.188 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.188 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.188 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.188 08:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.449 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.449 
08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.449 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.710 { 00:18:42.710 "cntlid": 1, 00:18:42.710 "qid": 0, 00:18:42.710 "state": "enabled", 00:18:42.710 "thread": "nvmf_tgt_poll_group_000", 00:18:42.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.710 "listen_address": { 00:18:42.710 "trtype": "TCP", 00:18:42.710 "adrfam": "IPv4", 00:18:42.710 "traddr": "10.0.0.2", 00:18:42.710 "trsvcid": "4420" 00:18:42.710 }, 00:18:42.710 "peer_address": { 00:18:42.710 "trtype": "TCP", 00:18:42.710 "adrfam": "IPv4", 00:18:42.710 "traddr": "10.0.0.1", 00:18:42.710 "trsvcid": "38312" 00:18:42.710 }, 00:18:42.710 "auth": { 00:18:42.710 "state": "completed", 00:18:42.710 "digest": "sha256", 00:18:42.710 "dhgroup": "null" 00:18:42.710 } 00:18:42.710 } 00:18:42.710 ]' 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.710 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.970 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:42.970 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.970 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.970 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.970 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.970 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:18:42.970 08:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.911 08:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.911 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.171 00:18:44.172 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.172 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.172 08:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.432 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.432 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.432 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.432 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.432 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.432 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.432 { 00:18:44.432 "cntlid": 3, 00:18:44.432 "qid": 0, 00:18:44.432 "state": "enabled", 00:18:44.432 "thread": "nvmf_tgt_poll_group_000", 00:18:44.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.432 "listen_address": { 00:18:44.432 "trtype": "TCP", 00:18:44.432 "adrfam": "IPv4", 00:18:44.432 "traddr": "10.0.0.2", 00:18:44.432 "trsvcid": "4420" 00:18:44.432 }, 00:18:44.432 "peer_address": { 00:18:44.432 "trtype": "TCP", 00:18:44.432 "adrfam": "IPv4", 00:18:44.432 "traddr": "10.0.0.1", 00:18:44.432 "trsvcid": "38346" 00:18:44.432 }, 00:18:44.432 "auth": { 00:18:44.432 "state": "completed", 00:18:44.432 "digest": "sha256", 00:18:44.432 "dhgroup": "null" 00:18:44.432 } 00:18:44.432 } 00:18:44.432 ]' 00:18:44.432 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.433 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.433 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.433 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:44.433 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.693 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.693 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.693 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.693 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:18:44.693 08:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.637 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.638 08:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.638 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.638 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.898 00:18:45.898 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.898 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.898 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.158 { 00:18:46.158 "cntlid": 5, 00:18:46.158 "qid": 0, 00:18:46.158 "state": "enabled", 00:18:46.158 "thread": "nvmf_tgt_poll_group_000", 00:18:46.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.158 "listen_address": { 00:18:46.158 "trtype": "TCP", 00:18:46.158 "adrfam": "IPv4", 00:18:46.158 "traddr": "10.0.0.2", 00:18:46.158 "trsvcid": "4420" 00:18:46.158 }, 00:18:46.158 "peer_address": { 00:18:46.158 "trtype": "TCP", 00:18:46.158 "adrfam": "IPv4", 00:18:46.158 "traddr": "10.0.0.1", 00:18:46.158 "trsvcid": "38366" 00:18:46.158 }, 00:18:46.158 "auth": { 00:18:46.158 "state": "completed", 00:18:46.158 "digest": "sha256", 00:18:46.158 "dhgroup": "null" 00:18:46.158 } 00:18:46.158 } 00:18:46.158 ]' 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.158 08:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.158 08:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.418 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:18:46.418 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:18:47.373 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.373 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.373 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.373 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.373 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.373 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.373 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.373 08:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.373 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.632 00:18:47.632 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.632 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.632 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.893 { 00:18:47.893 "cntlid": 7, 00:18:47.893 "qid": 0, 00:18:47.893 "state": "enabled", 00:18:47.893 "thread": "nvmf_tgt_poll_group_000", 00:18:47.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:47.893 "listen_address": { 00:18:47.893 "trtype": "TCP", 00:18:47.893 "adrfam": "IPv4", 00:18:47.893 "traddr": "10.0.0.2", 00:18:47.893 "trsvcid": "4420" 00:18:47.893 }, 00:18:47.893 "peer_address": { 00:18:47.893 "trtype": "TCP", 00:18:47.893 "adrfam": "IPv4", 00:18:47.893 "traddr": "10.0.0.1", 00:18:47.893 "trsvcid": "38382" 00:18:47.893 }, 00:18:47.893 "auth": { 00:18:47.893 "state": "completed", 00:18:47.893 "digest": "sha256", 00:18:47.893 "dhgroup": "null" 00:18:47.893 } 00:18:47.893 } 00:18:47.893 ]' 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.893 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.154 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:18:48.154 08:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.725 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.986 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.247 00:18:49.247 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.247 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.247 08:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.507 { 00:18:49.507 "cntlid": 9, 00:18:49.507 "qid": 0, 00:18:49.507 "state": "enabled", 00:18:49.507 "thread": "nvmf_tgt_poll_group_000", 00:18:49.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:49.507 "listen_address": { 00:18:49.507 "trtype": "TCP", 00:18:49.507 "adrfam": "IPv4", 00:18:49.507 "traddr": "10.0.0.2", 00:18:49.507 "trsvcid": "4420" 00:18:49.507 }, 00:18:49.507 "peer_address": { 00:18:49.507 "trtype": "TCP", 00:18:49.507 "adrfam": "IPv4", 00:18:49.507 "traddr": "10.0.0.1", 00:18:49.507 "trsvcid": "38404" 00:18:49.507 }, 00:18:49.507 "auth": { 00:18:49.507 "state": "completed", 00:18:49.507 "digest": "sha256", 00:18:49.507 "dhgroup": "ffdhe2048" 00:18:49.507 } 00:18:49.507 } 00:18:49.507 ]' 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.507 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.768 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:18:49.768 08:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.713 08:33:42 
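# A minimal sketch of one connect_authenticate round as traced above, assuming
# the DH-HMAC-CHAP keys (key0/ckey0) were registered earlier in auth.sh and the
# target answers on its default RPC socket:
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# target side: require DH-HMAC-CHAP from this host
"$rpc_py" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach with the matching keys, check the controller, detach
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $("$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc_py" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"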
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.713 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.975 00:18:50.975 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.975 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.975 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.236 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.236 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.236 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.236 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.236 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.236 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.236 { 00:18:51.236 "cntlid": 11, 00:18:51.236 "qid": 0, 00:18:51.236 "state": "enabled", 00:18:51.236 "thread": "nvmf_tgt_poll_group_000", 00:18:51.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.236 "listen_address": { 00:18:51.236 "trtype": "TCP", 00:18:51.236 "adrfam": "IPv4", 00:18:51.236 "traddr": "10.0.0.2", 00:18:51.236 "trsvcid": "4420" 00:18:51.236 }, 00:18:51.236 "peer_address": { 00:18:51.236 "trtype": "TCP", 00:18:51.236 "adrfam": "IPv4", 00:18:51.236 "traddr": "10.0.0.1", 00:18:51.236 "trsvcid": "54774" 00:18:51.236 }, 00:18:51.236 "auth": { 00:18:51.236 "state": "completed", 00:18:51.236 "digest": "sha256", 00:18:51.236 "dhgroup": "ffdhe2048" 00:18:51.236 } 00:18:51.236 } 00:18:51.236 ]' 00:18:51.236 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.237 08:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.237 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.237 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.237 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.237 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.237 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.237 08:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.497 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:18:51.497 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:18:52.440 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.440 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.440 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.440 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.440 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.440 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.440 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.440 08:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:52.440 08:33:44 
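# A minimal sketch of the qpair check repeated after each attach above, assuming
# a single connected qpair and the same jq filters auth.sh uses ($rpc_py as in
# the sketch further up):
qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication done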
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.440 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.701 00:18:52.701 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.701 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.701 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.962 { 00:18:52.962 "cntlid": 13, 00:18:52.962 "qid": 0, 00:18:52.962 "state": "enabled", 00:18:52.962 "thread": "nvmf_tgt_poll_group_000", 00:18:52.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:52.962 "listen_address": { 00:18:52.962 "trtype": "TCP", 00:18:52.962 "adrfam": "IPv4", 00:18:52.962 "traddr": "10.0.0.2", 00:18:52.962 "trsvcid": "4420" 00:18:52.962 }, 00:18:52.962 "peer_address": { 00:18:52.962 "trtype": "TCP", 00:18:52.962 "adrfam": "IPv4", 00:18:52.962 "traddr": "10.0.0.1", 00:18:52.962 "trsvcid": "54800" 00:18:52.962 }, 00:18:52.962 "auth": { 00:18:52.962 "state": "completed", 00:18:52.962 "digest": 
"sha256", 00:18:52.962 "dhgroup": "ffdhe2048" 00:18:52.962 } 00:18:52.962 } 00:18:52.962 ]' 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.962 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.223 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:18:53.223 08:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:18:53.793 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.054 08:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.054 08:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.315 00:18:54.315 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.315 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.315 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.576 { 00:18:54.576 "cntlid": 15, 00:18:54.576 "qid": 0, 00:18:54.576 "state": "enabled", 00:18:54.576 "thread": "nvmf_tgt_poll_group_000", 00:18:54.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:54.576 "listen_address": { 00:18:54.576 "trtype": "TCP", 00:18:54.576 "adrfam": "IPv4", 00:18:54.576 "traddr": "10.0.0.2", 00:18:54.576 "trsvcid": "4420" 00:18:54.576 }, 00:18:54.576 "peer_address": { 00:18:54.576 "trtype": "TCP", 00:18:54.576 "adrfam": "IPv4", 00:18:54.576 "traddr": "10.0.0.1", 00:18:54.576 
"trsvcid": "54836" 00:18:54.576 }, 00:18:54.576 "auth": { 00:18:54.576 "state": "completed", 00:18:54.576 "digest": "sha256", 00:18:54.576 "dhgroup": "ffdhe2048" 00:18:54.576 } 00:18:54.576 } 00:18:54.576 ]' 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.576 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.837 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:18:54.837 08:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:55.777 08:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.777 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.039 00:18:56.039 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.039 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.039 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.300 { 00:18:56.300 "cntlid": 17, 00:18:56.300 "qid": 0, 00:18:56.300 "state": "enabled", 00:18:56.300 "thread": "nvmf_tgt_poll_group_000", 00:18:56.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.300 "listen_address": { 00:18:56.300 "trtype": "TCP", 00:18:56.300 "adrfam": "IPv4", 
00:18:56.300 "traddr": "10.0.0.2", 00:18:56.300 "trsvcid": "4420" 00:18:56.300 }, 00:18:56.300 "peer_address": { 00:18:56.300 "trtype": "TCP", 00:18:56.300 "adrfam": "IPv4", 00:18:56.300 "traddr": "10.0.0.1", 00:18:56.300 "trsvcid": "54872" 00:18:56.300 }, 00:18:56.300 "auth": { 00:18:56.300 "state": "completed", 00:18:56.300 "digest": "sha256", 00:18:56.300 "dhgroup": "ffdhe3072" 00:18:56.300 } 00:18:56.300 } 00:18:56.300 ]' 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.300 08:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.300 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.300 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.300 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.560 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:18:56.560 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:18:57.130 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.390 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.390 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.390 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.390 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.390 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.390 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.390 08:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.390 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.650 00:18:57.650 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.650 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.650 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.931 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.932 { 
00:18:57.932 "cntlid": 19, 00:18:57.932 "qid": 0, 00:18:57.932 "state": "enabled", 00:18:57.932 "thread": "nvmf_tgt_poll_group_000", 00:18:57.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:57.932 "listen_address": { 00:18:57.932 "trtype": "TCP", 00:18:57.932 "adrfam": "IPv4", 00:18:57.932 "traddr": "10.0.0.2", 00:18:57.932 "trsvcid": "4420" 00:18:57.932 }, 00:18:57.932 "peer_address": { 00:18:57.932 "trtype": "TCP", 00:18:57.932 "adrfam": "IPv4", 00:18:57.932 "traddr": "10.0.0.1", 00:18:57.932 "trsvcid": "54894" 00:18:57.932 }, 00:18:57.932 "auth": { 00:18:57.932 "state": "completed", 00:18:57.932 "digest": "sha256", 00:18:57.932 "dhgroup": "ffdhe3072" 00:18:57.932 } 00:18:57.932 } 00:18:57.932 ]' 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.932 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.192 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:18:58.192 08:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.141 08:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.402 00:18:59.402 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.402 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.402 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.662 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.663 08:33:51 
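# Why the key3 rounds above omit --dhchap-ctrlr-key: the ckey=() expansion
# echoed in the trace only adds the flag when a controller key exists for that
# index (keyid stands for the function's positional $3; ckeys[3] is empty in
# this run):
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$keyid" "${ckey[@]}"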
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.663 { 00:18:59.663 "cntlid": 21, 00:18:59.663 "qid": 0, 00:18:59.663 "state": "enabled", 00:18:59.663 "thread": "nvmf_tgt_poll_group_000", 00:18:59.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:59.663 "listen_address": { 00:18:59.663 "trtype": "TCP", 00:18:59.663 "adrfam": "IPv4", 00:18:59.663 "traddr": "10.0.0.2", 00:18:59.663 "trsvcid": "4420" 00:18:59.663 }, 00:18:59.663 "peer_address": { 00:18:59.663 "trtype": "TCP", 00:18:59.663 "adrfam": "IPv4", 00:18:59.663 "traddr": "10.0.0.1", 00:18:59.663 "trsvcid": "32976" 00:18:59.663 }, 00:18:59.663 "auth": { 00:18:59.663 "state": "completed", 00:18:59.663 "digest": "sha256", 00:18:59.663 "dhgroup": "ffdhe3072" 00:18:59.663 } 00:18:59.663 } 00:18:59.663 ]' 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.663 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.923 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:18:59.923 08:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.864 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.124 00:19:01.124 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.124 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.124 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.385 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.385 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.385 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.385 08:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.385 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.385 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.385 { 00:19:01.385 "cntlid": 23, 00:19:01.385 "qid": 0, 00:19:01.385 "state": "enabled", 00:19:01.385 "thread": "nvmf_tgt_poll_group_000", 00:19:01.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.385 "listen_address": { 00:19:01.385 "trtype": "TCP", 00:19:01.385 "adrfam": "IPv4", 00:19:01.385 "traddr": "10.0.0.2", 00:19:01.385 "trsvcid": "4420" 00:19:01.385 }, 00:19:01.385 "peer_address": { 00:19:01.385 "trtype": "TCP", 00:19:01.385 "adrfam": "IPv4", 00:19:01.385 "traddr": "10.0.0.1", 00:19:01.385 "trsvcid": "32996" 00:19:01.385 }, 00:19:01.385 "auth": { 00:19:01.385 "state": "completed", 00:19:01.385 "digest": "sha256", 00:19:01.385 "dhgroup": "ffdhe3072" 00:19:01.385 } 00:19:01.385 } 00:19:01.385 ]' 00:19:01.385 08:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.385 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.385 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.385 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.385 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.385 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.385 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.385 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.645 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:01.645 08:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:02.215 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.473 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.473 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.473 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.473 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:02.473 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.474 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.732 00:19:02.732 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.732 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.732 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.991 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.992 { 00:19:02.992 "cntlid": 25, 00:19:02.992 "qid": 0, 00:19:02.992 "state": "enabled", 00:19:02.992 "thread": "nvmf_tgt_poll_group_000", 00:19:02.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:02.992 "listen_address": { 00:19:02.992 "trtype": "TCP", 00:19:02.992 "adrfam": "IPv4", 00:19:02.992 "traddr": "10.0.0.2", 00:19:02.992 "trsvcid": "4420" 00:19:02.992 }, 00:19:02.992 "peer_address": { 00:19:02.992 "trtype": "TCP", 00:19:02.992 "adrfam": "IPv4", 00:19:02.992 "traddr": "10.0.0.1", 00:19:02.992 "trsvcid": "33032" 00:19:02.992 }, 00:19:02.992 "auth": { 00:19:02.992 "state": "completed", 00:19:02.992 "digest": "sha256", 00:19:02.992 "dhgroup": "ffdhe4096" 00:19:02.992 } 00:19:02.992 } 00:19:02.992 ]' 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.992 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.252 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.252 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.252 08:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.252 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:03.252 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.195 08:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.455 00:19:04.455 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.455 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.455 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.717 { 00:19:04.717 "cntlid": 27, 00:19:04.717 "qid": 0, 00:19:04.717 "state": "enabled", 00:19:04.717 "thread": "nvmf_tgt_poll_group_000", 00:19:04.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:04.717 "listen_address": { 00:19:04.717 "trtype": "TCP", 00:19:04.717 "adrfam": "IPv4", 00:19:04.717 "traddr": "10.0.0.2", 00:19:04.717 "trsvcid": "4420" 00:19:04.717 }, 00:19:04.717 "peer_address": { 00:19:04.717 "trtype": "TCP", 00:19:04.717 "adrfam": "IPv4", 00:19:04.717 "traddr": "10.0.0.1", 00:19:04.717 "trsvcid": "33046" 00:19:04.717 }, 00:19:04.717 "auth": { 00:19:04.717 "state": "completed", 00:19:04.717 "digest": "sha256", 00:19:04.717 "dhgroup": "ffdhe4096" 00:19:04.717 } 00:19:04.717 } 00:19:04.717 ]' 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.717 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.977 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.977 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.977 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.977 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.977 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.977 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:04.978 08:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:05.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.919 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.181 00:19:06.181 08:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
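[editor's note] The trace keeps repeating the same verification step for every digest/dhgroup/key combination: the host-side RPC server confirms the authenticated controller exists, then the target's qpair list is checked for the negotiated digest, dhgroup, and a completed DH-HMAC-CHAP transaction. A minimal sketch of that step, reconstructed from the xtrace markers (target/auth.sh@73-78); the rpc.py path and socket follow the trace, while $digest and $dhgroup stand in for the test's loop variables:

# Sketch reconstructed from the trace; not quoted from target/auth.sh itself.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock   # the "hostrpc" helper's socket, per the trace

# Host side: the attached bdev controller must be present.
name=$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side (default /var/tmp/spdk.sock): the qpair must report the
# combination under test and auth.state == "completed".
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

The glob-escaped comparisons in the trace ([[ sha256 == \s\h\a\2\5\6 ]] and similar) are just how bash xtrace prints the right-hand side of these checks; they are the same string matches as above.
[end of editor's note]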
00:19:06.181 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.181 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.442 { 00:19:06.442 "cntlid": 29, 00:19:06.442 "qid": 0, 00:19:06.442 "state": "enabled", 00:19:06.442 "thread": "nvmf_tgt_poll_group_000", 00:19:06.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.442 "listen_address": { 00:19:06.442 "trtype": "TCP", 00:19:06.442 "adrfam": "IPv4", 00:19:06.442 "traddr": "10.0.0.2", 00:19:06.442 "trsvcid": "4420" 00:19:06.442 }, 00:19:06.442 "peer_address": { 00:19:06.442 "trtype": "TCP", 00:19:06.442 "adrfam": "IPv4", 00:19:06.442 "traddr": "10.0.0.1", 00:19:06.442 "trsvcid": "33078" 00:19:06.442 }, 00:19:06.442 "auth": { 00:19:06.442 "state": "completed", 00:19:06.442 "digest": "sha256", 00:19:06.442 "dhgroup": "ffdhe4096" 00:19:06.442 } 00:19:06.442 } 00:19:06.442 ]' 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.442 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.703 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.703 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.703 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.703 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.703 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.703 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:06.703 08:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: 
--dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.643 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.903 00:19:07.903 08:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.903 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.903 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.168 { 00:19:08.168 "cntlid": 31, 00:19:08.168 "qid": 0, 00:19:08.168 "state": "enabled", 00:19:08.168 "thread": "nvmf_tgt_poll_group_000", 00:19:08.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:08.168 "listen_address": { 00:19:08.168 "trtype": "TCP", 00:19:08.168 "adrfam": "IPv4", 00:19:08.168 "traddr": "10.0.0.2", 00:19:08.168 "trsvcid": "4420" 00:19:08.168 }, 00:19:08.168 "peer_address": { 00:19:08.168 "trtype": "TCP", 00:19:08.168 "adrfam": "IPv4", 00:19:08.168 "traddr": "10.0.0.1", 00:19:08.168 "trsvcid": "33108" 00:19:08.168 }, 00:19:08.168 "auth": { 00:19:08.168 "state": "completed", 00:19:08.168 "digest": "sha256", 00:19:08.168 "dhgroup": "ffdhe4096" 00:19:08.168 } 00:19:08.168 } 00:19:08.168 ]' 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.168 08:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.444 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.444 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.444 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.444 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:08.444 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:09.412 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.412 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.412 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.412 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.412 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.412 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.413 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.413 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.413 08:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.413 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.672 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.933 { 00:19:09.933 "cntlid": 33, 00:19:09.933 "qid": 0, 00:19:09.933 "state": "enabled", 00:19:09.933 "thread": "nvmf_tgt_poll_group_000", 00:19:09.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.933 "listen_address": { 00:19:09.933 "trtype": "TCP", 00:19:09.933 "adrfam": "IPv4", 00:19:09.933 "traddr": "10.0.0.2", 00:19:09.933 "trsvcid": "4420" 00:19:09.933 }, 00:19:09.933 "peer_address": { 00:19:09.933 "trtype": "TCP", 00:19:09.933 "adrfam": "IPv4", 00:19:09.933 "traddr": "10.0.0.1", 00:19:09.933 "trsvcid": "51598" 00:19:09.933 }, 00:19:09.933 "auth": { 00:19:09.933 "state": "completed", 00:19:09.933 "digest": "sha256", 00:19:09.933 "dhgroup": "ffdhe6144" 00:19:09.933 } 00:19:09.933 } 00:19:09.933 ]' 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.933 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.194 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.194 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.194 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.194 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.194 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.194 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret 
DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:10.194 08:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.135 08:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.706 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.706 { 00:19:11.706 "cntlid": 35, 00:19:11.706 "qid": 0, 00:19:11.706 "state": "enabled", 00:19:11.706 "thread": "nvmf_tgt_poll_group_000", 00:19:11.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:11.706 "listen_address": { 00:19:11.706 "trtype": "TCP", 00:19:11.706 "adrfam": "IPv4", 00:19:11.706 "traddr": "10.0.0.2", 00:19:11.706 "trsvcid": "4420" 00:19:11.706 }, 00:19:11.706 "peer_address": { 00:19:11.706 "trtype": "TCP", 00:19:11.706 "adrfam": "IPv4", 00:19:11.706 "traddr": "10.0.0.1", 00:19:11.706 "trsvcid": "51632" 00:19:11.706 }, 00:19:11.706 "auth": { 00:19:11.706 "state": "completed", 00:19:11.706 "digest": "sha256", 00:19:11.706 "dhgroup": "ffdhe6144" 00:19:11.706 } 00:19:11.706 } 00:19:11.706 ]' 00:19:11.706 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.967 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.967 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.967 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.967 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.967 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.967 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.967 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.227 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:12.227 08:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:12.798 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.798 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.798 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.798 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.798 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.798 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.798 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.798 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.059 08:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.319 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.580 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.580 { 00:19:13.580 "cntlid": 37, 00:19:13.580 "qid": 0, 00:19:13.580 "state": "enabled", 00:19:13.580 "thread": "nvmf_tgt_poll_group_000", 00:19:13.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.580 "listen_address": { 00:19:13.580 "trtype": "TCP", 00:19:13.580 "adrfam": "IPv4", 00:19:13.580 "traddr": "10.0.0.2", 00:19:13.580 "trsvcid": "4420" 00:19:13.580 }, 00:19:13.581 "peer_address": { 00:19:13.581 "trtype": "TCP", 00:19:13.581 "adrfam": "IPv4", 00:19:13.581 "traddr": "10.0.0.1", 00:19:13.581 "trsvcid": "51654" 00:19:13.581 }, 00:19:13.581 "auth": { 00:19:13.581 "state": "completed", 00:19:13.581 "digest": "sha256", 00:19:13.581 "dhgroup": "ffdhe6144" 00:19:13.581 } 00:19:13.581 } 00:19:13.581 ]' 00:19:13.581 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.581 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.581 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.841 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.841 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.841 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.841 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:13.841 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.841 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:13.841 08:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:14.782 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.782 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.782 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.782 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.782 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.782 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.782 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.782 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.042 08:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.042 08:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.303 00:19:15.303 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.303 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.303 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.563 { 00:19:15.563 "cntlid": 39, 00:19:15.563 "qid": 0, 00:19:15.563 "state": "enabled", 00:19:15.563 "thread": "nvmf_tgt_poll_group_000", 00:19:15.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:15.563 "listen_address": { 00:19:15.563 "trtype": "TCP", 00:19:15.563 "adrfam": "IPv4", 00:19:15.563 "traddr": "10.0.0.2", 00:19:15.563 "trsvcid": "4420" 00:19:15.563 }, 00:19:15.563 "peer_address": { 00:19:15.563 "trtype": "TCP", 00:19:15.563 "adrfam": "IPv4", 00:19:15.563 "traddr": "10.0.0.1", 00:19:15.563 "trsvcid": "51688" 00:19:15.563 }, 00:19:15.563 "auth": { 00:19:15.563 "state": "completed", 00:19:15.563 "digest": "sha256", 00:19:15.563 "dhgroup": "ffdhe6144" 00:19:15.563 } 00:19:15.563 } 00:19:15.563 ]' 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.563 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.823 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:15.823 08:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:16.763 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
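The trace above and below repeats one verification cycle per (digest, dhgroup, keyid) tuple: restrict the host-side bdev driver to a single DH-HMAC-CHAP digest/dhgroup pair, register the host NQN on the target with the keys under test, attach a controller through the host RPC socket (this connect is where the authentication handshake actually runs), confirm the resulting qpair negotiated the expected parameters, then detach. A minimal sketch of that cycle, reconstructed from the rpc.py calls logged here — the rpc/hostnqn/subnqn shorthand is illustrative, not the actual target/auth.sh source (which wraps these calls in its rpc_cmd and hostrpc helpers), and target-side calls are shown against rpc.py's default socket; when no controller key is configured for a key id (as with key3 above), the --dhchap-ctrlr-key arguments are simply omitted:

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, as seen in the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0
digest=sha256
dhgroup=ffdhe6144
keyid=3

# Host side: allow exactly one digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: register the host with the key(s) under test.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"

# Attach a controller; DH-HMAC-CHAP runs during this connect.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid"

# Verify what the qpair actually negotiated.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next tuple.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0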
00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.764 08:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.340 00:19:17.340 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.341 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.341 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.604 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.605 { 00:19:17.605 "cntlid": 41, 00:19:17.605 "qid": 0, 00:19:17.605 "state": "enabled", 00:19:17.605 "thread": "nvmf_tgt_poll_group_000", 00:19:17.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:17.605 "listen_address": { 00:19:17.605 "trtype": "TCP", 00:19:17.605 "adrfam": "IPv4", 00:19:17.605 "traddr": "10.0.0.2", 00:19:17.605 "trsvcid": "4420" 00:19:17.605 }, 00:19:17.605 "peer_address": { 00:19:17.605 "trtype": "TCP", 00:19:17.605 "adrfam": "IPv4", 00:19:17.605 "traddr": "10.0.0.1", 00:19:17.605 "trsvcid": "51706" 00:19:17.605 }, 00:19:17.605 "auth": { 00:19:17.605 "state": "completed", 00:19:17.605 "digest": "sha256", 00:19:17.605 "dhgroup": "ffdhe8192" 00:19:17.605 } 00:19:17.605 } 00:19:17.605 ]' 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.605 08:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.605 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.865 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:17.865 08:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:18.436 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.436 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.436 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.436 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.436 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.436 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.436 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.436 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.698 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.270 00:19:19.270 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.270 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.270 08:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.530 { 00:19:19.530 "cntlid": 43, 00:19:19.530 "qid": 0, 00:19:19.530 "state": "enabled", 00:19:19.530 "thread": "nvmf_tgt_poll_group_000", 00:19:19.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:19.530 "listen_address": { 00:19:19.530 "trtype": "TCP", 00:19:19.530 "adrfam": "IPv4", 00:19:19.530 "traddr": "10.0.0.2", 00:19:19.530 "trsvcid": "4420" 00:19:19.530 }, 00:19:19.530 "peer_address": { 00:19:19.530 "trtype": "TCP", 00:19:19.530 "adrfam": "IPv4", 00:19:19.530 "traddr": "10.0.0.1", 00:19:19.530 "trsvcid": "51748" 00:19:19.530 }, 00:19:19.530 "auth": { 00:19:19.530 "state": "completed", 00:19:19.530 "digest": "sha256", 00:19:19.530 "dhgroup": "ffdhe8192" 00:19:19.530 } 00:19:19.530 } 00:19:19.530 ]' 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.530 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.791 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:19.791 08:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.732 08:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.732 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.303 00:19:21.303 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.303 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.303 08:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.303 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.303 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.303 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.303 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.564 { 00:19:21.564 "cntlid": 45, 00:19:21.564 "qid": 0, 00:19:21.564 "state": "enabled", 00:19:21.564 "thread": "nvmf_tgt_poll_group_000", 00:19:21.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:21.564 "listen_address": { 00:19:21.564 "trtype": "TCP", 00:19:21.564 "adrfam": "IPv4", 00:19:21.564 "traddr": "10.0.0.2", 00:19:21.564 "trsvcid": "4420" 00:19:21.564 }, 00:19:21.564 "peer_address": { 00:19:21.564 "trtype": "TCP", 00:19:21.564 "adrfam": "IPv4", 00:19:21.564 "traddr": "10.0.0.1", 00:19:21.564 "trsvcid": "49122" 00:19:21.564 }, 00:19:21.564 "auth": { 00:19:21.564 "state": "completed", 00:19:21.564 "digest": "sha256", 00:19:21.564 "dhgroup": "ffdhe8192" 00:19:21.564 } 00:19:21.564 } 00:19:21.564 ]' 00:19:21.564 
08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.564 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.825 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:21.825 08:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:22.397 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.658 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.658 08:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.659 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.230 00:19:23.230 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.230 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.230 08:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.491 { 00:19:23.491 "cntlid": 47, 00:19:23.491 "qid": 0, 00:19:23.491 "state": "enabled", 00:19:23.491 "thread": "nvmf_tgt_poll_group_000", 00:19:23.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.491 "listen_address": { 00:19:23.491 "trtype": "TCP", 00:19:23.491 "adrfam": "IPv4", 00:19:23.491 "traddr": "10.0.0.2", 00:19:23.491 "trsvcid": "4420" 00:19:23.491 }, 00:19:23.491 "peer_address": { 00:19:23.491 "trtype": "TCP", 00:19:23.491 "adrfam": "IPv4", 00:19:23.491 "traddr": "10.0.0.1", 00:19:23.491 "trsvcid": "49134" 00:19:23.491 }, 00:19:23.491 "auth": { 00:19:23.491 "state": "completed", 00:19:23.491 
"digest": "sha256", 00:19:23.491 "dhgroup": "ffdhe8192" 00:19:23.491 } 00:19:23.491 } 00:19:23.491 ]' 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.491 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.752 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:23.752 08:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:24.692 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.692 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.692 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.692 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.692 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.692 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:24.692 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:24.693 08:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.693 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.953 00:19:24.953 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.953 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.953 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.953 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.953 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.953 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.953 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.213 { 00:19:25.213 "cntlid": 49, 00:19:25.213 "qid": 0, 00:19:25.213 "state": "enabled", 00:19:25.213 "thread": "nvmf_tgt_poll_group_000", 00:19:25.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:25.213 "listen_address": { 00:19:25.213 "trtype": "TCP", 00:19:25.213 "adrfam": "IPv4", 
00:19:25.213 "traddr": "10.0.0.2", 00:19:25.213 "trsvcid": "4420" 00:19:25.213 }, 00:19:25.213 "peer_address": { 00:19:25.213 "trtype": "TCP", 00:19:25.213 "adrfam": "IPv4", 00:19:25.213 "traddr": "10.0.0.1", 00:19:25.213 "trsvcid": "49162" 00:19:25.213 }, 00:19:25.213 "auth": { 00:19:25.213 "state": "completed", 00:19:25.213 "digest": "sha384", 00:19:25.213 "dhgroup": "null" 00:19:25.213 } 00:19:25.213 } 00:19:25.213 ]' 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.213 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.214 08:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.474 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:25.474 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:26.044 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.044 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.044 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.044 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.305 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.305 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.305 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:26.305 08:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.305 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.565 00:19:26.565 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.565 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.565 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.826 { 00:19:26.826 "cntlid": 51, 00:19:26.826 "qid": 0, 00:19:26.826 "state": "enabled", 
00:19:26.826 "thread": "nvmf_tgt_poll_group_000", 00:19:26.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.826 "listen_address": { 00:19:26.826 "trtype": "TCP", 00:19:26.826 "adrfam": "IPv4", 00:19:26.826 "traddr": "10.0.0.2", 00:19:26.826 "trsvcid": "4420" 00:19:26.826 }, 00:19:26.826 "peer_address": { 00:19:26.826 "trtype": "TCP", 00:19:26.826 "adrfam": "IPv4", 00:19:26.826 "traddr": "10.0.0.1", 00:19:26.826 "trsvcid": "49194" 00:19:26.826 }, 00:19:26.826 "auth": { 00:19:26.826 "state": "completed", 00:19:26.826 "digest": "sha384", 00:19:26.826 "dhgroup": "null" 00:19:26.826 } 00:19:26.826 } 00:19:26.826 ]' 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.826 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.088 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:27.088 08:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.028 08:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.288 00:19:28.288 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.288 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.288 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.548 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.548 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.549 08:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.549 { 00:19:28.549 "cntlid": 53, 00:19:28.549 "qid": 0, 00:19:28.549 "state": "enabled", 00:19:28.549 "thread": "nvmf_tgt_poll_group_000", 00:19:28.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.549 "listen_address": { 00:19:28.549 "trtype": "TCP", 00:19:28.549 "adrfam": "IPv4", 00:19:28.549 "traddr": "10.0.0.2", 00:19:28.549 "trsvcid": "4420" 00:19:28.549 }, 00:19:28.549 "peer_address": { 00:19:28.549 "trtype": "TCP", 00:19:28.549 "adrfam": "IPv4", 00:19:28.549 "traddr": "10.0.0.1", 00:19:28.549 "trsvcid": "49220" 00:19:28.549 }, 00:19:28.549 "auth": { 00:19:28.549 "state": "completed", 00:19:28.549 "digest": "sha384", 00:19:28.549 "dhgroup": "null" 00:19:28.549 } 00:19:28.549 } 00:19:28.549 ]' 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.549 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.809 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:28.809 08:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.751 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.011 00:19:30.011 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.011 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.011 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.272 { 00:19:30.272 "cntlid": 55, 00:19:30.272 "qid": 0, 00:19:30.272 "state": "enabled", 00:19:30.272 "thread": "nvmf_tgt_poll_group_000", 00:19:30.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:30.272 "listen_address": { 00:19:30.272 "trtype": "TCP", 00:19:30.272 "adrfam": "IPv4", 00:19:30.272 "traddr": "10.0.0.2", 00:19:30.272 "trsvcid": "4420" 00:19:30.272 }, 00:19:30.272 "peer_address": { 00:19:30.272 "trtype": "TCP", 00:19:30.272 "adrfam": "IPv4", 00:19:30.272 "traddr": "10.0.0.1", 00:19:30.272 "trsvcid": "44558" 00:19:30.272 }, 00:19:30.272 "auth": { 00:19:30.272 "state": "completed", 00:19:30.272 "digest": "sha384", 00:19:30.272 "dhgroup": "null" 00:19:30.272 } 00:19:30.272 } 00:19:30.272 ]' 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.272 08:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.532 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:30.532 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:31.105 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.365 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.365 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.365 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.365 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.365 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.365 08:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.365 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:31.365 08:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.365 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.625 00:19:31.625 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.625 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.625 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.885 { 00:19:31.885 "cntlid": 57, 00:19:31.885 "qid": 0, 00:19:31.885 "state": "enabled", 00:19:31.885 "thread": "nvmf_tgt_poll_group_000", 00:19:31.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.885 "listen_address": { 00:19:31.885 "trtype": "TCP", 00:19:31.885 "adrfam": "IPv4", 00:19:31.885 "traddr": "10.0.0.2", 00:19:31.885 "trsvcid": "4420" 00:19:31.885 }, 00:19:31.885 "peer_address": { 00:19:31.885 "trtype": "TCP", 00:19:31.885 "adrfam": "IPv4", 00:19:31.885 "traddr": "10.0.0.1", 00:19:31.885 "trsvcid": "44592" 00:19:31.885 }, 00:19:31.885 "auth": { 00:19:31.885 "state": "completed", 00:19:31.885 "digest": "sha384", 00:19:31.885 "dhgroup": "ffdhe2048" 00:19:31.885 } 00:19:31.885 } 00:19:31.885 ]' 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.885 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.145 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:32.145 08:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.084 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.085 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.085 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.085 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.085 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.085 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.085 08:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.344 00:19:33.344 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.344 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.344 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.604 { 00:19:33.604 "cntlid": 59, 00:19:33.604 "qid": 0, 00:19:33.604 "state": "enabled", 00:19:33.604 "thread": "nvmf_tgt_poll_group_000", 00:19:33.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.604 "listen_address": { 00:19:33.604 "trtype": "TCP", 00:19:33.604 "adrfam": "IPv4", 00:19:33.604 "traddr": "10.0.0.2", 00:19:33.604 "trsvcid": "4420" 00:19:33.604 }, 00:19:33.604 "peer_address": { 00:19:33.604 "trtype": "TCP", 00:19:33.604 "adrfam": "IPv4", 00:19:33.604 "traddr": "10.0.0.1", 00:19:33.604 "trsvcid": "44612" 00:19:33.604 }, 00:19:33.604 "auth": { 00:19:33.604 "state": "completed", 00:19:33.604 "digest": "sha384", 00:19:33.604 "dhgroup": "ffdhe2048" 00:19:33.604 } 00:19:33.604 } 00:19:33.604 ]' 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.604 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.864 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:33.864 08:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.805 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.066 00:19:35.066 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.066 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:35.066 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.326 { 00:19:35.326 "cntlid": 61, 00:19:35.326 "qid": 0, 00:19:35.326 "state": "enabled", 00:19:35.326 "thread": "nvmf_tgt_poll_group_000", 00:19:35.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:35.326 "listen_address": { 00:19:35.326 "trtype": "TCP", 00:19:35.326 "adrfam": "IPv4", 00:19:35.326 "traddr": "10.0.0.2", 00:19:35.326 "trsvcid": "4420" 00:19:35.326 }, 00:19:35.326 "peer_address": { 00:19:35.326 "trtype": "TCP", 00:19:35.326 "adrfam": "IPv4", 00:19:35.326 "traddr": "10.0.0.1", 00:19:35.326 "trsvcid": "44632" 00:19:35.326 }, 00:19:35.326 "auth": { 00:19:35.326 "state": "completed", 00:19:35.326 "digest": "sha384", 00:19:35.326 "dhgroup": "ffdhe2048" 00:19:35.326 } 00:19:35.326 } 00:19:35.326 ]' 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.326 08:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.326 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.326 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.326 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.586 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:35.586 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:36.157 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.418 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.418 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.418 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.418 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.418 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.418 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.418 08:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.418 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.678 00:19:36.678 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.678 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:36.678 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.938 { 00:19:36.938 "cntlid": 63, 00:19:36.938 "qid": 0, 00:19:36.938 "state": "enabled", 00:19:36.938 "thread": "nvmf_tgt_poll_group_000", 00:19:36.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.938 "listen_address": { 00:19:36.938 "trtype": "TCP", 00:19:36.938 "adrfam": "IPv4", 00:19:36.938 "traddr": "10.0.0.2", 00:19:36.938 "trsvcid": "4420" 00:19:36.938 }, 00:19:36.938 "peer_address": { 00:19:36.938 "trtype": "TCP", 00:19:36.938 "adrfam": "IPv4", 00:19:36.938 "traddr": "10.0.0.1", 00:19:36.938 "trsvcid": "44658" 00:19:36.938 }, 00:19:36.938 "auth": { 00:19:36.938 "state": "completed", 00:19:36.938 "digest": "sha384", 00:19:36.938 "dhgroup": "ffdhe2048" 00:19:36.938 } 00:19:36.938 } 00:19:36.938 ]' 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.938 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.939 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.939 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.939 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.939 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.939 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.200 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:37.200 08:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:38.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.141 08:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.401 
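The trace above is one pass of the test's inner loop. Condensed, each (digest, dhgroup, keyid) round drives the following RPC sequence; this is a sketch reconstructed from the commands visible in the trace (socket path, NQNs, and flags are the values from this run, and the shell variables exist only for readability):

# One DH-HMAC-CHAP round as driven by target/auth.sh (reconstructed sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# 1. Restrict the SPDK host stack to one digest/dhgroup combination.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# 2. Register the key pair for this host on the target subsystem
#    (target-side RPC, default socket).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller through the host-side RPC server; DH-HMAC-CHAP
#    authentication runs during this connect.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0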
00:19:38.401 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.401 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.401 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.663 { 00:19:38.663 "cntlid": 65, 00:19:38.663 "qid": 0, 00:19:38.663 "state": "enabled", 00:19:38.663 "thread": "nvmf_tgt_poll_group_000", 00:19:38.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:38.663 "listen_address": { 00:19:38.663 "trtype": "TCP", 00:19:38.663 "adrfam": "IPv4", 00:19:38.663 "traddr": "10.0.0.2", 00:19:38.663 "trsvcid": "4420" 00:19:38.663 }, 00:19:38.663 "peer_address": { 00:19:38.663 "trtype": "TCP", 00:19:38.663 "adrfam": "IPv4", 00:19:38.663 "traddr": "10.0.0.1", 00:19:38.663 "trsvcid": "44682" 00:19:38.663 }, 00:19:38.663 "auth": { 00:19:38.663 "state": "completed", 00:19:38.663 "digest": "sha384", 00:19:38.663 "dhgroup": "ffdhe3072" 00:19:38.663 } 00:19:38.663 } 00:19:38.663 ]' 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.663 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.664 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.664 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.925 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:38.925 08:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.868 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.129 00:19:40.129 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.129 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.129 08:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.390 { 00:19:40.390 "cntlid": 67, 00:19:40.390 "qid": 0, 00:19:40.390 "state": "enabled", 00:19:40.390 "thread": "nvmf_tgt_poll_group_000", 00:19:40.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:40.390 "listen_address": { 00:19:40.390 "trtype": "TCP", 00:19:40.390 "adrfam": "IPv4", 00:19:40.390 "traddr": "10.0.0.2", 00:19:40.390 "trsvcid": "4420" 00:19:40.390 }, 00:19:40.390 "peer_address": { 00:19:40.390 "trtype": "TCP", 00:19:40.390 "adrfam": "IPv4", 00:19:40.390 "traddr": "10.0.0.1", 00:19:40.390 "trsvcid": "42968" 00:19:40.390 }, 00:19:40.390 "auth": { 00:19:40.390 "state": "completed", 00:19:40.390 "digest": "sha384", 00:19:40.390 "dhgroup": "ffdhe3072" 00:19:40.390 } 00:19:40.390 } 00:19:40.390 ]' 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.390 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.651 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret 
DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:40.651 08:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:41.594 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.594 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.594 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.594 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.594 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.594 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.595 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.854 00:19:41.854 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.854 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.854 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.114 { 00:19:42.114 "cntlid": 69, 00:19:42.114 "qid": 0, 00:19:42.114 "state": "enabled", 00:19:42.114 "thread": "nvmf_tgt_poll_group_000", 00:19:42.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:42.114 "listen_address": { 00:19:42.114 "trtype": "TCP", 00:19:42.114 "adrfam": "IPv4", 00:19:42.114 "traddr": "10.0.0.2", 00:19:42.114 "trsvcid": "4420" 00:19:42.114 }, 00:19:42.114 "peer_address": { 00:19:42.114 "trtype": "TCP", 00:19:42.114 "adrfam": "IPv4", 00:19:42.114 "traddr": "10.0.0.1", 00:19:42.114 "trsvcid": "42994" 00:19:42.114 }, 00:19:42.114 "auth": { 00:19:42.114 "state": "completed", 00:19:42.114 "digest": "sha384", 00:19:42.114 "dhgroup": "ffdhe3072" 00:19:42.114 } 00:19:42.114 } 00:19:42.114 ]' 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.114 08:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:42.375 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:42.375 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:42.944 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:43.214 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.215 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:43.215 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.215 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.215 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.215 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
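Each round is also exercised through the kernel initiator with nvme-cli, passing the same keys in the standard DHHC-1 secret representation (the "DHHC-1:<t>:" prefix carries the key transform: 00 is an unhashed secret, while 01/02/03 denote SHA-256/384/512-derived keys). A sketch of that leg, with the secret strings elided (the full values appear verbatim in the trace) and $hostnqn as in the earlier snippet; note that the script's ckeys[3] is empty, so the key3 rounds above pass no --dhchap-ctrl-secret and authenticate the host only:

# Kernel-initiator leg of a round (commands as seen in the trace; secrets elided).
host_secret='DHHC-1:02:...'   # host key; full value is in the log above
ctrl_secret='DHHC-1:01:...'   # controller key, enables bidirectional auth
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0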
00:19:43.215 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.215 08:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.476 00:19:43.476 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.476 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.476 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.735 { 00:19:43.735 "cntlid": 71, 00:19:43.735 "qid": 0, 00:19:43.735 "state": "enabled", 00:19:43.735 "thread": "nvmf_tgt_poll_group_000", 00:19:43.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:43.735 "listen_address": { 00:19:43.735 "trtype": "TCP", 00:19:43.735 "adrfam": "IPv4", 00:19:43.735 "traddr": "10.0.0.2", 00:19:43.735 "trsvcid": "4420" 00:19:43.735 }, 00:19:43.735 "peer_address": { 00:19:43.735 "trtype": "TCP", 00:19:43.735 "adrfam": "IPv4", 00:19:43.735 "traddr": "10.0.0.1", 00:19:43.735 "trsvcid": "43014" 00:19:43.735 }, 00:19:43.735 "auth": { 00:19:43.735 "state": "completed", 00:19:43.735 "digest": "sha384", 00:19:43.735 "dhgroup": "ffdhe3072" 00:19:43.735 } 00:19:43.735 } 00:19:43.735 ]' 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.735 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.736 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.995 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:43.995 08:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.566 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
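Here the outer loop advances to the last group in the sweep, ffdhe4096. The overall shape of the sweep mirrors the target/auth.sh line numbers shown in the trace (119-123); in this sketch the array contents are inferred from the groups and key IDs that actually appear in this run:

# Sweep structure behind the log above (sketch; hostrpc and
# connect_authenticate are the script's own helpers traced here).
for dhgroup in "${dhgroups[@]}"; do       # null ffdhe2048 ffdhe3072 ffdhe4096 ...
    for keyid in "${!keys[@]}"; do        # 0 1 2 3
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done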
00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.827 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.088 00:19:45.088 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.088 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.088 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.350 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.350 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.350 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.350 08:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.350 { 00:19:45.350 "cntlid": 73, 00:19:45.350 "qid": 0, 00:19:45.350 "state": "enabled", 00:19:45.350 "thread": "nvmf_tgt_poll_group_000", 00:19:45.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.350 "listen_address": { 00:19:45.350 "trtype": "TCP", 00:19:45.350 "adrfam": "IPv4", 00:19:45.350 "traddr": "10.0.0.2", 00:19:45.350 "trsvcid": "4420" 00:19:45.350 }, 00:19:45.350 "peer_address": { 00:19:45.350 "trtype": "TCP", 00:19:45.350 "adrfam": "IPv4", 00:19:45.350 "traddr": "10.0.0.1", 00:19:45.350 "trsvcid": "43038" 00:19:45.350 }, 00:19:45.350 "auth": { 00:19:45.350 "state": "completed", 00:19:45.350 "digest": "sha384", 00:19:45.350 "dhgroup": "ffdhe4096" 00:19:45.350 } 00:19:45.350 } 00:19:45.350 ]' 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.350 
08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.350 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.612 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:45.612 08:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:46.554 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.554 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
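Each connect_authenticate call (target/auth.sh@65-83 in the trace) is one end-to-end DH-HMAC-CHAP round trip: allow the host NQN on the subsystem with the key slot under test, attach a controller through the host-side bdev layer, assert the negotiated parameters on the resulting admin qpair, then run the same handshake once more through the Linux kernel host via nvme-cli before tearing everything down. A sketch reconstructed from the xtrace output (simplified - the controller-name check via bdev_nvme_get_controllers is omitted - and $hostnqn stands for the uuid-based host NQN seen throughout the log):

connect_authenticate() {
    local digest=$1 dhgroup=$2 key=key$3 ckey qpairs
    # Expands to nothing when no controller key is configured for this slot,
    # which turns the cycle into host-only (unidirectional) authentication.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "$key" "${ckey[@]}"
    bdev_connect -b nvme0 --dhchap-key "$key" "${ckey[@]}"

    # Verify the target's view of what was actually negotiated.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

    hostrpc bdev_nvme_detach_controller nvme0
    nvme_connect --dhchap-secret "${keys[$3]}" ${ckeys[$3]:+--dhchap-ctrl-secret "${ckeys[$3]}"}
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
}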
00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.555 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.824 00:19:46.824 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.824 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.824 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.092 { 00:19:47.092 "cntlid": 75, 00:19:47.092 "qid": 0, 00:19:47.092 "state": "enabled", 00:19:47.092 "thread": "nvmf_tgt_poll_group_000", 00:19:47.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:47.092 "listen_address": { 00:19:47.092 "trtype": "TCP", 00:19:47.092 "adrfam": "IPv4", 00:19:47.092 "traddr": "10.0.0.2", 00:19:47.092 "trsvcid": "4420" 00:19:47.092 }, 00:19:47.092 "peer_address": { 00:19:47.092 "trtype": "TCP", 00:19:47.092 "adrfam": "IPv4", 00:19:47.092 "traddr": "10.0.0.1", 00:19:47.092 "trsvcid": "43072" 00:19:47.092 }, 00:19:47.092 "auth": { 00:19:47.092 "state": "completed", 00:19:47.092 "digest": "sha384", 00:19:47.092 "dhgroup": "ffdhe4096" 00:19:47.092 } 00:19:47.092 } 00:19:47.092 ]' 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 ==
\f\f\d\h\e\4\0\9\6 ]] 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.092 08:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.354 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:47.354 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.324 08:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.622 00:19:48.622 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.622 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.622 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.622 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.622 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.622 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.622 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.924 { 00:19:48.924 "cntlid": 77, 00:19:48.924 "qid": 0, 00:19:48.924 "state": "enabled", 00:19:48.924 "thread": "nvmf_tgt_poll_group_000", 00:19:48.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:48.924 "listen_address": { 00:19:48.924 "trtype": "TCP", 00:19:48.924 "adrfam": "IPv4", 00:19:48.924 "traddr": "10.0.0.2", 00:19:48.924 "trsvcid": "4420" 00:19:48.924 }, 00:19:48.924 "peer_address": { 00:19:48.924 "trtype": "TCP", 00:19:48.924 "adrfam": "IPv4", 00:19:48.924 "traddr": "10.0.0.1", 00:19:48.924 "trsvcid": "43106" 00:19:48.924 }, 00:19:48.924 "auth": { 00:19:48.924 "state": "completed", 00:19:48.924 "digest": "sha384", 00:19:48.924 "dhgroup": "ffdhe4096" 00:19:48.924 } 00:19:48.924 } 00:19:48.924 ]' 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.924 08:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.924 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.184 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:49.184 08:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:49.754 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.754 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.754 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.754 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.754 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.754 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.754 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:49.754 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.014 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.275 00:19:50.275 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.275 08:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.275 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.536 { 00:19:50.536 "cntlid": 79, 00:19:50.536 "qid": 0, 00:19:50.536 "state": "enabled", 00:19:50.536 "thread": "nvmf_tgt_poll_group_000", 00:19:50.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:50.536 "listen_address": { 00:19:50.536 "trtype": "TCP", 00:19:50.536 "adrfam": "IPv4", 00:19:50.536 "traddr": "10.0.0.2", 00:19:50.536 "trsvcid": "4420" 00:19:50.536 }, 00:19:50.536 "peer_address": { 00:19:50.536 "trtype": "TCP", 00:19:50.536 "adrfam": "IPv4", 00:19:50.536 "traddr": "10.0.0.1", 00:19:50.536 "trsvcid": "47866" 00:19:50.536 }, 00:19:50.536 "auth": { 00:19:50.536 "state": "completed", 00:19:50.536 "digest": "sha384", 00:19:50.536 "dhgroup": "ffdhe4096" 00:19:50.536 } 00:19:50.536 } 00:19:50.536 ]' 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.536 08:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.536 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.797 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:50.797 08:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
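The DHHC-1 secrets visible in the nvme connect lines also show how the key matrix is laid out: the slot-N host secret carries transform id N (DHHC-1:00: through DHHC-1:03:), slots 0-2 are each paired with a distinct controller secret whose transform id runs in the opposite order (03, 02, 01), and slot 3 - as in the cycle just above - connects with no --dhchap-ctrl-secret at all, exercising unidirectional authentication. A hypothetical regeneration of a comparable key layout with nvme-cli's gen-dhchap-key (the keys used in this run were produced earlier in the script, outside this excerpt):

nqn=nqn.2024-03.io.spdk:cnode0
for i in 0 1 2 3; do
    # -m selects the DHHC-1 transform: 0 = none, 1/2/3 = SHA-256/384/512,
    # matching the DHHC-1:0N: prefixes seen in the log.
    keys[i]=$(nvme gen-dhchap-key -m "$i" -n "$nqn")
done
ckeys[0]=$(nvme gen-dhchap-key -m 3 -n "$nqn")
ckeys[1]=$(nvme gen-dhchap-key -m 2 -n "$nqn")
ckeys[2]=$(nvme gen-dhchap-key -m 1 -n "$nqn")
ckeys[3]=""    # slot 3: host-only authentication, no controller key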
00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.738 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.310 00:19:52.310 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.310 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.310 08:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.310 { 00:19:52.310 "cntlid": 81, 00:19:52.310 "qid": 0, 00:19:52.310 "state": "enabled", 00:19:52.310 "thread": "nvmf_tgt_poll_group_000", 00:19:52.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.310 "listen_address": { 00:19:52.310 "trtype": "TCP", 00:19:52.310 "adrfam": "IPv4", 00:19:52.310 "traddr": "10.0.0.2", 00:19:52.310 "trsvcid": "4420" 00:19:52.310 }, 00:19:52.310 "peer_address": { 00:19:52.310 "trtype": "TCP", 00:19:52.310 "adrfam": "IPv4", 00:19:52.310 "traddr": "10.0.0.1", 00:19:52.310 "trsvcid": "47876" 00:19:52.310 }, 00:19:52.310 "auth": { 00:19:52.310 "state": "completed", 00:19:52.310 "digest":
"sha384", 00:19:52.310 "dhgroup": "ffdhe6144" 00:19:52.310 } 00:19:52.310 } 00:19:52.310 ]' 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.310 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.570 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.570 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.570 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.570 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:52.570 08:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.510 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.511 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.511 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.082 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.082 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.082 { 00:19:54.082 "cntlid": 83, 00:19:54.082 "qid": 0, 00:19:54.082 "state": "enabled", 00:19:54.082 "thread": "nvmf_tgt_poll_group_000", 00:19:54.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:54.082 "listen_address": { 00:19:54.082 "trtype": "TCP", 00:19:54.082 "adrfam": "IPv4", 00:19:54.082 "traddr": "10.0.0.2", 00:19:54.082 
"trsvcid": "4420" 00:19:54.082 }, 00:19:54.082 "peer_address": { 00:19:54.082 "trtype": "TCP", 00:19:54.082 "adrfam": "IPv4", 00:19:54.082 "traddr": "10.0.0.1", 00:19:54.082 "trsvcid": "47906" 00:19:54.082 }, 00:19:54.082 "auth": { 00:19:54.082 "state": "completed", 00:19:54.083 "digest": "sha384", 00:19:54.083 "dhgroup": "ffdhe6144" 00:19:54.083 } 00:19:54.083 } 00:19:54.083 ]' 00:19:54.083 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.083 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.083 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.344 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.344 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.344 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.344 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.344 08:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.344 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:54.344 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:19:55.288 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.288 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.288 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.288 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.288 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.288 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.288 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:55.288 08:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:55.288 
08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.288 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.860 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.860 { 00:19:55.860 "cntlid": 85, 00:19:55.860 "qid": 0, 00:19:55.860 "state": "enabled", 00:19:55.860 "thread": "nvmf_tgt_poll_group_000", 00:19:55.860 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:55.860 "listen_address": { 00:19:55.860 "trtype": "TCP", 00:19:55.860 "adrfam": "IPv4", 00:19:55.860 "traddr": "10.0.0.2", 00:19:55.860 "trsvcid": "4420" 00:19:55.860 }, 00:19:55.860 "peer_address": { 00:19:55.860 "trtype": "TCP", 00:19:55.860 "adrfam": "IPv4", 00:19:55.860 "traddr": "10.0.0.1", 00:19:55.860 "trsvcid": "47932" 00:19:55.860 }, 00:19:55.860 "auth": { 00:19:55.860 "state": "completed", 00:19:55.860 "digest": "sha384", 00:19:55.860 "dhgroup": "ffdhe6144" 00:19:55.860 } 00:19:55.860 } 00:19:55.860 ]' 00:19:55.860 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.121 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.121 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.121 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.121 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.121 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.121 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.121 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.382 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:56.383 08:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:19:56.956 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.956 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.956 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.956 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.956 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.956 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.956 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.956 08:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.217 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.218 08:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.477 00:19:57.477 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.477 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.477 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.737 { 00:19:57.737 "cntlid": 87, 
00:19:57.737 "qid": 0, 00:19:57.737 "state": "enabled", 00:19:57.737 "thread": "nvmf_tgt_poll_group_000", 00:19:57.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:57.737 "listen_address": { 00:19:57.737 "trtype": "TCP", 00:19:57.737 "adrfam": "IPv4", 00:19:57.737 "traddr": "10.0.0.2", 00:19:57.737 "trsvcid": "4420" 00:19:57.737 }, 00:19:57.737 "peer_address": { 00:19:57.737 "trtype": "TCP", 00:19:57.737 "adrfam": "IPv4", 00:19:57.737 "traddr": "10.0.0.1", 00:19:57.737 "trsvcid": "47962" 00:19:57.737 }, 00:19:57.737 "auth": { 00:19:57.737 "state": "completed", 00:19:57.737 "digest": "sha384", 00:19:57.737 "dhgroup": "ffdhe6144" 00:19:57.737 } 00:19:57.737 } 00:19:57.737 ]' 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.737 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.998 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.998 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.998 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.998 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:57.998 08:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.940 08:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.511 00:19:59.511 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.511 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.511 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.775 { 00:19:59.775 "cntlid": 89, 00:19:59.775 "qid": 0, 00:19:59.775 "state": "enabled", 00:19:59.775 "thread": "nvmf_tgt_poll_group_000", 00:19:59.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:59.775 "listen_address": { 00:19:59.775 "trtype": "TCP", 00:19:59.775 "adrfam": "IPv4", 00:19:59.775 "traddr": "10.0.0.2", 00:19:59.775 "trsvcid": "4420" 00:19:59.775 }, 00:19:59.775 "peer_address": { 00:19:59.775 "trtype": "TCP", 00:19:59.775 "adrfam": "IPv4", 00:19:59.775 "traddr": "10.0.0.1", 00:19:59.775 "trsvcid": "47994" 00:19:59.775 }, 00:19:59.775 "auth": { 00:19:59.775 "state": "completed", 00:19:59.775 "digest": "sha384", 00:19:59.775 "dhgroup": "ffdhe8192" 00:19:59.775 } 00:19:59.775 } 00:19:59.775 ]' 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.775 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.036 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:00.036 08:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.976 08:34:52 
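Each iteration is then verified from the target side rather than by trusting the connect: the active queue pairs are dumped and the auth object is asserted on, with digest and dhgroup matching what was configured and state reading completed. A minimal sketch of that check; rpc_cmd is the harness wrapper that addresses the target application's RPC socket throughout this run:

    # Assert the negotiated auth parameters on the first qpair
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]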
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.976 08:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.544 00:20:01.544 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.544 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.544 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.804 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.804 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:01.804 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.804 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.804 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.804 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.804 { 00:20:01.804 "cntlid": 91, 00:20:01.804 "qid": 0, 00:20:01.804 "state": "enabled", 00:20:01.804 "thread": "nvmf_tgt_poll_group_000", 00:20:01.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:01.804 "listen_address": { 00:20:01.804 "trtype": "TCP", 00:20:01.804 "adrfam": "IPv4", 00:20:01.804 "traddr": "10.0.0.2", 00:20:01.804 "trsvcid": "4420" 00:20:01.804 }, 00:20:01.804 "peer_address": { 00:20:01.804 "trtype": "TCP", 00:20:01.804 "adrfam": "IPv4", 00:20:01.804 "traddr": "10.0.0.1", 00:20:01.804 "trsvcid": "50614" 00:20:01.804 }, 00:20:01.804 "auth": { 00:20:01.804 "state": "completed", 00:20:01.804 "digest": "sha384", 00:20:01.804 "dhgroup": "ffdhe8192" 00:20:01.804 } 00:20:01.804 } 00:20:01.804 ]' 00:20:01.804 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.804 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.805 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.805 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.805 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.805 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.805 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.805 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.065 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:02.065 08:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.008 08:34:54 
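The secrets being passed around use the NVMe-oF DHHC-1:XX:<base64>: representation. The middle field appears to encode how the raw key material was transformed (00 for an untransformed secret, 01/02/03 for SHA-256/384/512-derived keys), which is why the host secret above begins DHHC-1:01: while its controller counterpart begins DHHC-1:02:. Secrets in this shape can typically be generated with nvme-cli; the command below is an assumption about the installed build rather than something shown in this log:

    # Generate a 48-byte DH-HMAC-CHAP secret in DHHC-1 form
    # (gen-dhchap-key is assumed present in this nvme-cli build)
    nvme gen-dhchap-key --key-length 48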
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.008 08:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.580 00:20:03.580 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.580 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.580 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.840 08:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.840 { 00:20:03.840 "cntlid": 93, 00:20:03.840 "qid": 0, 00:20:03.840 "state": "enabled", 00:20:03.840 "thread": "nvmf_tgt_poll_group_000", 00:20:03.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:03.840 "listen_address": { 00:20:03.840 "trtype": "TCP", 00:20:03.840 "adrfam": "IPv4", 00:20:03.840 "traddr": "10.0.0.2", 00:20:03.840 "trsvcid": "4420" 00:20:03.840 }, 00:20:03.840 "peer_address": { 00:20:03.840 "trtype": "TCP", 00:20:03.840 "adrfam": "IPv4", 00:20:03.840 "traddr": "10.0.0.1", 00:20:03.840 "trsvcid": "50652" 00:20:03.840 }, 00:20:03.840 "auth": { 00:20:03.840 "state": "completed", 00:20:03.840 "digest": "sha384", 00:20:03.840 "dhgroup": "ffdhe8192" 00:20:03.840 } 00:20:03.840 } 00:20:03.840 ]' 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.840 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.841 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.102 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:04.102 08:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:04.672 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.934 08:34:56 
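For key indexes that carry a controller key (key0 through key2 in this sweep), the kernel-initiator connect is bidirectional: --dhchap-secret authenticates the host to the controller and --dhchap-ctrl-secret requires the controller to prove itself back. Two other flags matter for test latency: -i 1 keeps the controller to a single I/O queue, and -l 0 zeroes ctrl-loss-tmo so an authentication failure surfaces immediately instead of being retried. Condensed, with the secrets abbreviated (the full values appear in the trace) and HOSTNQN/HOSTID standing in for the uuid-based values used throughout this run:

    # Bidirectional DH-HMAC-CHAP connect via the kernel initiator
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret      'DHHC-1:01:...' \
        --dhchap-ctrl-secret 'DHHC-1:02:...'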
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.934 08:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.505 00:20:05.505 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.505 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.505 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.766 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.766 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.766 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.766 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.766 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.766 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.766 { 00:20:05.766 "cntlid": 95, 00:20:05.766 "qid": 0, 00:20:05.766 "state": "enabled", 00:20:05.766 "thread": "nvmf_tgt_poll_group_000", 00:20:05.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:05.766 "listen_address": { 00:20:05.766 "trtype": "TCP", 00:20:05.766 "adrfam": "IPv4", 00:20:05.766 "traddr": "10.0.0.2", 00:20:05.766 "trsvcid": "4420" 00:20:05.766 }, 00:20:05.766 "peer_address": { 00:20:05.766 "trtype": "TCP", 00:20:05.766 "adrfam": "IPv4", 00:20:05.766 "traddr": "10.0.0.1", 00:20:05.766 "trsvcid": "50670" 00:20:05.766 }, 00:20:05.766 "auth": { 00:20:05.766 "state": "completed", 00:20:05.766 "digest": "sha384", 00:20:05.766 "dhgroup": "ffdhe8192" 00:20:05.766 } 00:20:05.766 } 00:20:05.766 ]' 00:20:05.766 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.767 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.767 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.767 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.767 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.767 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.767 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.767 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.027 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:06.027 08:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:06.969 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.969 08:34:58 
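key3 is the one index whose ckeys slot is empty, so its nvmf_subsystem_add_host carries --dhchap-key key3 alone and the matching connect sends only the host-side DHHC-1:03: secret; authentication degrades cleanly to unidirectional. The script gets this for free from bash's expand-only-if-non-empty parameter expansion, visible verbatim in the trace at auth.sh@68:

    # Expands to the ctrlr-key flags only when a controller key exists for index $3;
    # $subnqn/$hostnqn stand in for the cnode0 and uuid NQNs used in this run
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # empty array for key3
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"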
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.969 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.969 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.969 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.969 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:06.969 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.969 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.970 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.229 00:20:07.229 
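At this point the outer loop advances to sha512 and restarts the DH-group sweep at null; a null DH group means no Diffie-Hellman exchange at all, so the challenge/response rests entirely on the shared secret. The for frames traced at auth.sh@118 through @121 imply a driver of roughly this shape (a reconstruction from the trace, not the verbatim script):

    for digest in "${digests[@]}"; do          # sha384 and sha512 are exercised here
      for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ..., ffdhe8192
        for keyid in "${!keys[@]}"; do         # 0..3
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done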
08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.229 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.229 08:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.490 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.490 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.490 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.490 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.490 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.490 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.490 { 00:20:07.490 "cntlid": 97, 00:20:07.490 "qid": 0, 00:20:07.490 "state": "enabled", 00:20:07.491 "thread": "nvmf_tgt_poll_group_000", 00:20:07.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:07.491 "listen_address": { 00:20:07.491 "trtype": "TCP", 00:20:07.491 "adrfam": "IPv4", 00:20:07.491 "traddr": "10.0.0.2", 00:20:07.491 "trsvcid": "4420" 00:20:07.491 }, 00:20:07.491 "peer_address": { 00:20:07.491 "trtype": "TCP", 00:20:07.491 "adrfam": "IPv4", 00:20:07.491 "traddr": "10.0.0.1", 00:20:07.491 "trsvcid": "50688" 00:20:07.491 }, 00:20:07.491 "auth": { 00:20:07.491 "state": "completed", 00:20:07.491 "digest": "sha512", 00:20:07.491 "dhgroup": "null" 00:20:07.491 } 00:20:07.491 } 00:20:07.491 ]' 00:20:07.491 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.491 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.491 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.491 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:07.491 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.491 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.491 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.491 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.750 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:07.750 08:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.695 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.696 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
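Two independent initiators are exercised against every key: the kernel host via nvme-cli, and the SPDK host application attaching a bdev controller over its RPC socket. The SPDK-side attach passes key names (key1, ckey1) that were registered with the host application earlier in the run (not shown in this excerpt) rather than raw secrets. Reduced to one call, with RPC as defined earlier:

    # SPDK host-side attach using previously registered key names
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1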
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.956 00:20:08.956 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.956 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.956 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.217 { 00:20:09.217 "cntlid": 99, 00:20:09.217 "qid": 0, 00:20:09.217 "state": "enabled", 00:20:09.217 "thread": "nvmf_tgt_poll_group_000", 00:20:09.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:09.217 "listen_address": { 00:20:09.217 "trtype": "TCP", 00:20:09.217 "adrfam": "IPv4", 00:20:09.217 "traddr": "10.0.0.2", 00:20:09.217 "trsvcid": "4420" 00:20:09.217 }, 00:20:09.217 "peer_address": { 00:20:09.217 "trtype": "TCP", 00:20:09.217 "adrfam": "IPv4", 00:20:09.217 "traddr": "10.0.0.1", 00:20:09.217 "trsvcid": "50716" 00:20:09.217 }, 00:20:09.217 "auth": { 00:20:09.217 "state": "completed", 00:20:09.217 "digest": "sha512", 00:20:09.217 "dhgroup": "null" 00:20:09.217 } 00:20:09.217 } 00:20:09.217 ]' 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.217 08:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.478 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:09.478 08:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:10.049 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.049 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.049 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.049 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.049 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.049 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.049 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:10.049 08:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:10.310 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.571 00:20:10.571 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.571 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.571 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.833 { 00:20:10.833 "cntlid": 101, 00:20:10.833 "qid": 0, 00:20:10.833 "state": "enabled", 00:20:10.833 "thread": "nvmf_tgt_poll_group_000", 00:20:10.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.833 "listen_address": { 00:20:10.833 "trtype": "TCP", 00:20:10.833 "adrfam": "IPv4", 00:20:10.833 "traddr": "10.0.0.2", 00:20:10.833 "trsvcid": "4420" 00:20:10.833 }, 00:20:10.833 "peer_address": { 00:20:10.833 "trtype": "TCP", 00:20:10.833 "adrfam": "IPv4", 00:20:10.833 "traddr": "10.0.0.1", 00:20:10.833 "trsvcid": "37720" 00:20:10.833 }, 00:20:10.833 "auth": { 00:20:10.833 "state": "completed", 00:20:10.833 "digest": "sha512", 00:20:10.833 "dhgroup": "null" 00:20:10.833 } 00:20:10.833 } 00:20:10.833 ]' 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.833 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.094 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
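Teardown is interleaved with the two initiators: the SPDK controller is detached once the qpair checks pass, the kernel connection is dropped right after its connect succeeds, and finally the host is deauthorized from the subsystem so the next key can be provisioned from a clean slate. In sequence, as traced:

    # Per-iteration teardown
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0     # SPDK initiator
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                      # kernel initiator
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"  # target side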
DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:11.094 08:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.036 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.296 00:20:12.296 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.296 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.296 08:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.558 { 00:20:12.558 "cntlid": 103, 00:20:12.558 "qid": 0, 00:20:12.558 "state": "enabled", 00:20:12.558 "thread": "nvmf_tgt_poll_group_000", 00:20:12.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:12.558 "listen_address": { 00:20:12.558 "trtype": "TCP", 00:20:12.558 "adrfam": "IPv4", 00:20:12.558 "traddr": "10.0.0.2", 00:20:12.558 "trsvcid": "4420" 00:20:12.558 }, 00:20:12.558 "peer_address": { 00:20:12.558 "trtype": "TCP", 00:20:12.558 "adrfam": "IPv4", 00:20:12.558 "traddr": "10.0.0.1", 00:20:12.558 "trsvcid": "37746" 00:20:12.558 }, 00:20:12.558 "auth": { 00:20:12.558 "state": "completed", 00:20:12.558 "digest": "sha512", 00:20:12.558 "dhgroup": "null" 00:20:12.558 } 00:20:12.558 } 00:20:12.558 ]' 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.558 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.820 08:35:04 
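A quirk worth knowing when reading this trace: set -x prints the right-hand side of a [[ ... == ... ]] comparison with every character backslash-escaped when it was quoted, so \n\v\m\e\0 and \c\o\m\p\l\e\t\e\d are simply the literal strings nvme0 and completed, not escape sequences. For example:

    $ set -x
    $ [[ nvme0 == "nvme0" ]]
    + [[ nvme0 == \n\v\m\e\0 ]]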
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:12.820 08:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.393 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.654 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.914 00:20:13.914 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.914 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.914 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.174 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.174 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.174 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.174 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.174 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.174 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.174 { 00:20:14.174 "cntlid": 105, 00:20:14.174 "qid": 0, 00:20:14.174 "state": "enabled", 00:20:14.174 "thread": "nvmf_tgt_poll_group_000", 00:20:14.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:14.174 "listen_address": { 00:20:14.174 "trtype": "TCP", 00:20:14.174 "adrfam": "IPv4", 00:20:14.174 "traddr": "10.0.0.2", 00:20:14.174 "trsvcid": "4420" 00:20:14.174 }, 00:20:14.174 "peer_address": { 00:20:14.174 "trtype": "TCP", 00:20:14.174 "adrfam": "IPv4", 00:20:14.174 "traddr": "10.0.0.1", 00:20:14.174 "trsvcid": "37780" 00:20:14.174 }, 00:20:14.174 "auth": { 00:20:14.174 "state": "completed", 00:20:14.174 "digest": "sha512", 00:20:14.174 "dhgroup": "ffdhe2048" 00:20:14.174 } 00:20:14.174 } 00:20:14.175 ]' 00:20:14.175 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.175 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.175 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.175 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.175 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.175 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.175 08:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.175 08:35:05 
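Between the attach and the qpair dump, each pass also confirms that the controller actually materialized on the host application, by listing controllers and matching the reported name against the expected nvme0. As a standalone sketch:

    # Confirm the attached controller exists before asserting on its qpairs
    name=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]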
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.435 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:14.435 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:15.006 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.006 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.006 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.006 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.006 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.006 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.006 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.007 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.267 08:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.267 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.267 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.267 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.267 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.528 00:20:15.528 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.528 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.528 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.788 { 00:20:15.788 "cntlid": 107, 00:20:15.788 "qid": 0, 00:20:15.788 "state": "enabled", 00:20:15.788 "thread": "nvmf_tgt_poll_group_000", 00:20:15.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:15.788 "listen_address": { 00:20:15.788 "trtype": "TCP", 00:20:15.788 "adrfam": "IPv4", 00:20:15.788 "traddr": "10.0.0.2", 00:20:15.788 "trsvcid": "4420" 00:20:15.788 }, 00:20:15.788 "peer_address": { 00:20:15.788 "trtype": "TCP", 00:20:15.788 "adrfam": "IPv4", 00:20:15.788 "traddr": "10.0.0.1", 00:20:15.788 "trsvcid": "37804" 00:20:15.788 }, 00:20:15.788 "auth": { 00:20:15.788 "state": "completed", 00:20:15.788 "digest": "sha512", 00:20:15.788 "dhgroup": "ffdhe2048" 00:20:15.788 } 00:20:15.788 } 00:20:15.788 ]' 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.788 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.049 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:16.049 08:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:16.620 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
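Each pass also exercises the kernel initiator, not just the SPDK host stack: nvme-cli is handed the wire-format DHHC-1 secrets directly (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key) instead of keyring names. A condensed sketch of that leg and its teardown, reusing the key1 secrets printed just above; the -i 1 / -l 0 flags simply mirror the harness invocation:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM:' \
      --dhchap-ctrl-secret 'DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==:'

  # Teardown before the next key is tried.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be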
00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.882 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.144 00:20:17.144 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.144 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.144 08:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.403 { 00:20:17.403 "cntlid": 109, 00:20:17.403 "qid": 0, 00:20:17.403 "state": "enabled", 00:20:17.403 "thread": "nvmf_tgt_poll_group_000", 00:20:17.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:17.403 "listen_address": { 00:20:17.403 "trtype": "TCP", 00:20:17.403 "adrfam": "IPv4", 00:20:17.403 "traddr": "10.0.0.2", 00:20:17.403 "trsvcid": "4420" 00:20:17.403 }, 00:20:17.403 "peer_address": { 00:20:17.403 "trtype": "TCP", 00:20:17.403 "adrfam": "IPv4", 00:20:17.403 "traddr": "10.0.0.1", 00:20:17.403 "trsvcid": "37834" 00:20:17.403 }, 00:20:17.403 "auth": { 00:20:17.403 "state": "completed", 00:20:17.403 "digest": "sha512", 00:20:17.403 "dhgroup": "ffdhe2048" 00:20:17.403 } 00:20:17.403 } 00:20:17.403 ]' 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.403 08:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.403 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.663 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:17.663 08:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.606 08:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.606 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.868 00:20:18.868 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.868 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.868 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.128 { 00:20:19.128 "cntlid": 111, 00:20:19.128 "qid": 0, 00:20:19.128 "state": "enabled", 00:20:19.128 "thread": "nvmf_tgt_poll_group_000", 00:20:19.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:19.128 "listen_address": { 00:20:19.128 "trtype": "TCP", 00:20:19.128 "adrfam": "IPv4", 00:20:19.128 "traddr": "10.0.0.2", 00:20:19.128 "trsvcid": "4420" 00:20:19.128 }, 00:20:19.128 "peer_address": { 00:20:19.128 "trtype": "TCP", 00:20:19.128 "adrfam": "IPv4", 00:20:19.128 "traddr": "10.0.0.1", 00:20:19.128 "trsvcid": "37864" 00:20:19.128 }, 00:20:19.128 "auth": { 00:20:19.128 "state": "completed", 00:20:19.128 "digest": "sha512", 00:20:19.128 "dhgroup": "ffdhe2048" 00:20:19.128 } 00:20:19.128 } 00:20:19.128 ]' 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.128 
08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.128 08:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.389 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:19.389 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:20.332 08:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:20.332 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:20.332 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.332 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:20.332 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:20.332 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.332 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.333 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.333 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.333 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.333 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.333 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.333 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.333 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.594 00:20:20.594 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.594 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.594 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.855 { 00:20:20.855 "cntlid": 113, 00:20:20.855 "qid": 0, 00:20:20.855 "state": "enabled", 00:20:20.855 "thread": "nvmf_tgt_poll_group_000", 00:20:20.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:20.855 "listen_address": { 00:20:20.855 "trtype": "TCP", 00:20:20.855 "adrfam": "IPv4", 00:20:20.855 "traddr": "10.0.0.2", 00:20:20.855 "trsvcid": "4420" 00:20:20.855 }, 00:20:20.855 "peer_address": { 00:20:20.855 "trtype": "TCP", 00:20:20.855 "adrfam": "IPv4", 00:20:20.855 "traddr": "10.0.0.1", 00:20:20.855 "trsvcid": "39832" 00:20:20.855 }, 00:20:20.855 "auth": { 00:20:20.855 "state": "completed", 00:20:20.855 "digest": "sha512", 00:20:20.855 "dhgroup": "ffdhe3072" 00:20:20.855 } 00:20:20.855 } 00:20:20.855 ]' 00:20:20.855 08:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.855 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.116 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:21.116 08:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.057 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.318 00:20:22.318 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.318 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.318 08:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.579 { 00:20:22.579 "cntlid": 115, 00:20:22.579 "qid": 0, 00:20:22.579 "state": "enabled", 00:20:22.579 "thread": "nvmf_tgt_poll_group_000", 00:20:22.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:22.579 "listen_address": { 00:20:22.579 "trtype": "TCP", 00:20:22.579 "adrfam": "IPv4", 00:20:22.579 "traddr": "10.0.0.2", 00:20:22.579 "trsvcid": "4420" 00:20:22.579 }, 00:20:22.579 "peer_address": { 00:20:22.579 "trtype": "TCP", 00:20:22.579 "adrfam": "IPv4", 
00:20:22.579 "traddr": "10.0.0.1", 00:20:22.579 "trsvcid": "39860" 00:20:22.579 }, 00:20:22.579 "auth": { 00:20:22.579 "state": "completed", 00:20:22.579 "digest": "sha512", 00:20:22.579 "dhgroup": "ffdhe3072" 00:20:22.579 } 00:20:22.579 } 00:20:22.579 ]' 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.579 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.840 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:22.840 08:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:23.782 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.782 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.782 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.782 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
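Success is asserted on the target side rather than inferred from the connect: nvmf_subsystem_get_qpairs reports, per qpair, an auth object carrying the negotiated state, digest and DH group, and the jq filters above check each field. The same verification by hand (expected values for the ffdhe3072 iterations shown here):

  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
  jq -r '.[0].auth.state'   qpairs.json   # completed
  jq -r '.[0].auth.digest'  qpairs.json   # sha512
  jq -r '.[0].auth.dhgroup' qpairs.json   # ffdhe3072

  # The attach itself is confirmed on the host socket first:
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0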
00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.783 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.044 00:20:24.044 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.044 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.044 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.306 { 00:20:24.306 "cntlid": 117, 00:20:24.306 "qid": 0, 00:20:24.306 "state": "enabled", 00:20:24.306 "thread": "nvmf_tgt_poll_group_000", 00:20:24.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:24.306 "listen_address": { 00:20:24.306 "trtype": "TCP", 
00:20:24.306 "adrfam": "IPv4", 00:20:24.306 "traddr": "10.0.0.2", 00:20:24.306 "trsvcid": "4420" 00:20:24.306 }, 00:20:24.306 "peer_address": { 00:20:24.306 "trtype": "TCP", 00:20:24.306 "adrfam": "IPv4", 00:20:24.306 "traddr": "10.0.0.1", 00:20:24.306 "trsvcid": "39882" 00:20:24.306 }, 00:20:24.306 "auth": { 00:20:24.306 "state": "completed", 00:20:24.306 "digest": "sha512", 00:20:24.306 "dhgroup": "ffdhe3072" 00:20:24.306 } 00:20:24.306 } 00:20:24.306 ]' 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.306 08:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.306 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.306 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.306 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.566 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:24.566 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:25.138 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.399 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.399 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.399 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.399 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.399 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.399 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:25.399 08:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.399 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.400 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.660 00:20:25.660 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.660 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.660 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.921 { 00:20:25.921 "cntlid": 119, 00:20:25.921 "qid": 0, 00:20:25.921 "state": "enabled", 00:20:25.921 "thread": "nvmf_tgt_poll_group_000", 00:20:25.921 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:25.921 "listen_address": { 00:20:25.921 "trtype": "TCP", 00:20:25.921 "adrfam": "IPv4", 00:20:25.921 "traddr": "10.0.0.2", 00:20:25.921 "trsvcid": "4420" 00:20:25.921 }, 00:20:25.921 "peer_address": { 00:20:25.921 "trtype": "TCP", 00:20:25.921 "adrfam": "IPv4", 00:20:25.921 "traddr": "10.0.0.1", 00:20:25.921 "trsvcid": "39912" 00:20:25.921 }, 00:20:25.921 "auth": { 00:20:25.921 "state": "completed", 00:20:25.921 "digest": "sha512", 00:20:25.921 "dhgroup": "ffdhe3072" 00:20:25.921 } 00:20:25.921 } 00:20:25.921 ]' 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.921 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.183 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:26.183 08:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:27.125 08:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.125 08:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.387 00:20:27.387 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.387 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.387 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.648 08:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.648 { 00:20:27.648 "cntlid": 121, 00:20:27.648 "qid": 0, 00:20:27.648 "state": "enabled", 00:20:27.648 "thread": "nvmf_tgt_poll_group_000", 00:20:27.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:27.648 "listen_address": { 00:20:27.648 "trtype": "TCP", 00:20:27.648 "adrfam": "IPv4", 00:20:27.648 "traddr": "10.0.0.2", 00:20:27.648 "trsvcid": "4420" 00:20:27.648 }, 00:20:27.648 "peer_address": { 00:20:27.648 "trtype": "TCP", 00:20:27.648 "adrfam": "IPv4", 00:20:27.648 "traddr": "10.0.0.1", 00:20:27.648 "trsvcid": "39938" 00:20:27.648 }, 00:20:27.648 "auth": { 00:20:27.648 "state": "completed", 00:20:27.648 "digest": "sha512", 00:20:27.648 "dhgroup": "ffdhe4096" 00:20:27.648 } 00:20:27.648 } 00:20:27.648 ]' 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.648 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.909 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:27.909 08:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:28.609 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.609 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.609 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.609 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.609 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
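The userspace half of every iteration is the bdev attach: bdev_nvme_attach_controller authenticates with keyring names rather than raw secrets, and is detached again once the qpair checks pass. Note also the conditional expansion ckey=(${ckeys[$3]:+...}) in the trace, which is why the key3 iterations carry no --dhchap-ctrlr-key: they test host-only (unidirectional) authentication. A sketch of the attach/detach pair, as issued against the host socket:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0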
00:20:28.609 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.609 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:28.609 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.869 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.130 00:20:29.130 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.130 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.130 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.390 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.390 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.390 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.390 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.390 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.390 08:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.390 { 00:20:29.390 "cntlid": 123, 00:20:29.390 "qid": 0, 00:20:29.390 "state": "enabled", 00:20:29.390 "thread": "nvmf_tgt_poll_group_000", 00:20:29.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:29.390 "listen_address": { 00:20:29.390 "trtype": "TCP", 00:20:29.390 "adrfam": "IPv4", 00:20:29.390 "traddr": "10.0.0.2", 00:20:29.390 "trsvcid": "4420" 00:20:29.390 }, 00:20:29.390 "peer_address": { 00:20:29.390 "trtype": "TCP", 00:20:29.390 "adrfam": "IPv4", 00:20:29.390 "traddr": "10.0.0.1", 00:20:29.391 "trsvcid": "39984" 00:20:29.391 }, 00:20:29.391 "auth": { 00:20:29.391 "state": "completed", 00:20:29.391 "digest": "sha512", 00:20:29.391 "dhgroup": "ffdhe4096" 00:20:29.391 } 00:20:29.391 } 00:20:29.391 ]' 00:20:29.391 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.391 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.391 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.391 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.391 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.391 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.391 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.391 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.651 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:29.651 08:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:30.223 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.483 08:35:22 
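After the bdev_nvme leg, each pass repeats the handshake from the kernel initiator with nvme-cli, passing the secrets in their DHHC-1 wire form. A sketch of that leg with the secret bodies elided; the flags follow the log's invocation, and on my reading -i sets the I/O queue count while -l sets the controller-loss timeout:

HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be   # uuid portion of the host NQN above
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret 'DHHC-1:01:<elided>:' --dhchap-ctrl-secret 'DHHC-1:02:<elided>:'
nvme disconnect -n "$SUBNQN"                  # expect: disconnected 1 controller(s)
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # reset for the next key index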
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.483 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.744 00:20:30.744 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.744 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.744 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.004 08:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.004 { 00:20:31.004 "cntlid": 125, 00:20:31.004 "qid": 0, 00:20:31.004 "state": "enabled", 00:20:31.004 "thread": "nvmf_tgt_poll_group_000", 00:20:31.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:31.004 "listen_address": { 00:20:31.004 "trtype": "TCP", 00:20:31.004 "adrfam": "IPv4", 00:20:31.004 "traddr": "10.0.0.2", 00:20:31.004 "trsvcid": "4420" 00:20:31.004 }, 00:20:31.004 "peer_address": { 00:20:31.004 "trtype": "TCP", 00:20:31.004 "adrfam": "IPv4", 00:20:31.004 "traddr": "10.0.0.1", 00:20:31.004 "trsvcid": "38994" 00:20:31.004 }, 00:20:31.004 "auth": { 00:20:31.004 "state": "completed", 00:20:31.004 "digest": "sha512", 00:20:31.004 "dhgroup": "ffdhe4096" 00:20:31.004 } 00:20:31.004 } 00:20:31.004 ]' 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.004 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.264 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:31.265 08:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.204 08:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.464 00:20:32.464 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.464 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.464 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.725 08:35:24 
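key3 is the one index registered with --dhchap-key alone: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion traced at target/auth.sh@68 yields an empty array when no controller key exists for the slot, so bidirectional authentication is skipped for that key rather than misconfigured. The idiom in isolation, with a stand-in ckeys table (the test's real key setup is outside this excerpt):

ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)            # index 3 deliberately absent
keyid=3
# ${var:+word} expands to word only when var is set and non-empty
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" "${ckey[@]}"        # "${ckey[@]}" expands to nothing for key3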
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.725 { 00:20:32.725 "cntlid": 127, 00:20:32.725 "qid": 0, 00:20:32.725 "state": "enabled", 00:20:32.725 "thread": "nvmf_tgt_poll_group_000", 00:20:32.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.725 "listen_address": { 00:20:32.725 "trtype": "TCP", 00:20:32.725 "adrfam": "IPv4", 00:20:32.725 "traddr": "10.0.0.2", 00:20:32.725 "trsvcid": "4420" 00:20:32.725 }, 00:20:32.725 "peer_address": { 00:20:32.725 "trtype": "TCP", 00:20:32.725 "adrfam": "IPv4", 00:20:32.725 "traddr": "10.0.0.1", 00:20:32.725 "trsvcid": "39012" 00:20:32.725 }, 00:20:32.725 "auth": { 00:20:32.725 "state": "completed", 00:20:32.725 "digest": "sha512", 00:20:32.725 "dhgroup": "ffdhe4096" 00:20:32.725 } 00:20:32.725 } 00:20:32.725 ]' 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.725 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.726 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.726 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.726 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.726 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.985 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:32.985 08:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:33.554 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.813 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.813 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.813 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.813 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.813 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.813 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.813 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.814 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.384 00:20:34.384 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.384 08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.384 
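Here the outer loop at target/auth.sh@119 advances the DH group from ffdhe4096 to ffdhe6144 and the whole key sweep reruns; ffdhe8192 follows further down. The driver structure, reconstructed from the @119-@123 trace lines; only these three groups appear in this excerpt, so the full contents of the dhgroups and keys arrays are an assumption:

dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)         # groups visible in this excerpt
for dhgroup in "${dhgroups[@]}"; do              # auth.sh@119
    for keyid in "${!keys[@]}"; do               # auth.sh@120
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"   # auth.sh@121
        connect_authenticate sha512 "$dhgroup" "$keyid"            # auth.sh@123
    done
done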
08:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.384 { 00:20:34.384 "cntlid": 129, 00:20:34.384 "qid": 0, 00:20:34.384 "state": "enabled", 00:20:34.384 "thread": "nvmf_tgt_poll_group_000", 00:20:34.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:34.384 "listen_address": { 00:20:34.384 "trtype": "TCP", 00:20:34.384 "adrfam": "IPv4", 00:20:34.384 "traddr": "10.0.0.2", 00:20:34.384 "trsvcid": "4420" 00:20:34.384 }, 00:20:34.384 "peer_address": { 00:20:34.384 "trtype": "TCP", 00:20:34.384 "adrfam": "IPv4", 00:20:34.384 "traddr": "10.0.0.1", 00:20:34.384 "trsvcid": "39028" 00:20:34.384 }, 00:20:34.384 "auth": { 00:20:34.384 "state": "completed", 00:20:34.384 "digest": "sha512", 00:20:34.384 "dhgroup": "ffdhe6144" 00:20:34.384 } 00:20:34.384 } 00:20:34.384 ]' 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.384 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.645 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.645 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.645 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.645 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.645 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.645 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:34.645 08:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret 
DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.588 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.159 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.159 { 00:20:36.159 "cntlid": 131, 00:20:36.159 "qid": 0, 00:20:36.159 "state": "enabled", 00:20:36.159 "thread": "nvmf_tgt_poll_group_000", 00:20:36.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.159 "listen_address": { 00:20:36.159 "trtype": "TCP", 00:20:36.159 "adrfam": "IPv4", 00:20:36.159 "traddr": "10.0.0.2", 00:20:36.159 "trsvcid": "4420" 00:20:36.159 }, 00:20:36.159 "peer_address": { 00:20:36.159 "trtype": "TCP", 00:20:36.159 "adrfam": "IPv4", 00:20:36.159 "traddr": "10.0.0.1", 00:20:36.159 "trsvcid": "39044" 00:20:36.159 }, 00:20:36.159 "auth": { 00:20:36.159 "state": "completed", 00:20:36.159 "digest": "sha512", 00:20:36.159 "dhgroup": "ffdhe6144" 00:20:36.159 } 00:20:36.159 } 00:20:36.159 ]' 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.159 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.419 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.419 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.419 08:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.419 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:36.419 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==: 00:20:37.359 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.359 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.359 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.359 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.359 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.359 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.359 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.359 08:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.359 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.930 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.930 { 00:20:37.930 "cntlid": 133, 00:20:37.930 "qid": 0, 00:20:37.930 "state": "enabled", 00:20:37.930 "thread": "nvmf_tgt_poll_group_000", 00:20:37.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:37.930 "listen_address": { 00:20:37.930 "trtype": "TCP", 00:20:37.930 "adrfam": "IPv4", 00:20:37.930 "traddr": "10.0.0.2", 00:20:37.930 "trsvcid": "4420" 00:20:37.930 }, 00:20:37.930 "peer_address": { 00:20:37.930 "trtype": "TCP", 00:20:37.930 "adrfam": "IPv4", 00:20:37.930 "traddr": "10.0.0.1", 00:20:37.930 "trsvcid": "39066" 00:20:37.930 }, 00:20:37.930 "auth": { 00:20:37.930 "state": "completed", 00:20:37.930 "digest": "sha512", 00:20:37.930 "dhgroup": "ffdhe6144" 00:20:37.930 } 00:20:37.930 } 00:20:37.930 ]' 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.930 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.190 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.190 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.190 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.191 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.191 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.191 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret 
DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:38.191 08:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M: 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:20:39.131 08:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.703 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.703 { 00:20:39.703 "cntlid": 135, 00:20:39.703 "qid": 0, 00:20:39.703 "state": "enabled", 00:20:39.703 "thread": "nvmf_tgt_poll_group_000", 00:20:39.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:39.703 "listen_address": { 00:20:39.703 "trtype": "TCP", 00:20:39.703 "adrfam": "IPv4", 00:20:39.703 "traddr": "10.0.0.2", 00:20:39.703 "trsvcid": "4420" 00:20:39.703 }, 00:20:39.703 "peer_address": { 00:20:39.703 "trtype": "TCP", 00:20:39.703 "adrfam": "IPv4", 00:20:39.703 "traddr": "10.0.0.1", 00:20:39.703 "trsvcid": "36748" 00:20:39.703 }, 00:20:39.703 "auth": { 00:20:39.703 "state": "completed", 00:20:39.703 "digest": "sha512", 00:20:39.703 "dhgroup": "ffdhe6144" 00:20:39.703 } 00:20:39.703 } 00:20:39.703 ]' 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.703 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.963 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.963 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.963 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.963 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.963 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.963 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:39.964 08:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.904 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:41.165 08:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:41.426
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:41.688 {
00:20:41.688 "cntlid": 137,
00:20:41.688 "qid": 0,
00:20:41.688 "state": "enabled",
00:20:41.688 "thread": "nvmf_tgt_poll_group_000",
00:20:41.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:41.688 "listen_address": {
00:20:41.688 "trtype": "TCP",
00:20:41.688 "adrfam": "IPv4",
00:20:41.688 "traddr": "10.0.0.2",
00:20:41.688 "trsvcid": "4420"
00:20:41.688 },
00:20:41.688 "peer_address": {
00:20:41.688 "trtype": "TCP",
00:20:41.688 "adrfam": "IPv4",
00:20:41.688 "traddr": "10.0.0.1",
00:20:41.688 "trsvcid": "36778"
00:20:41.688 },
00:20:41.688 "auth": {
00:20:41.688 "state": "completed",
00:20:41.688 "digest": "sha512",
00:20:41.688 "dhgroup": "ffdhe8192"
00:20:41.688 }
00:20:41.688 }
00:20:41.688 ]'
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:41.688 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:41.949 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:41.949 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:41.949 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:41.949 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:41.949 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:41.949 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=:
00:20:41.949 08:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=:
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:42.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:42.897 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.157 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.158 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:43.158 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:43.158 08:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:43.417
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:43.678 {
00:20:43.678 "cntlid": 139,
00:20:43.678 "qid": 0,
00:20:43.678 "state": "enabled",
00:20:43.678 "thread": "nvmf_tgt_poll_group_000",
00:20:43.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:43.678 "listen_address": {
00:20:43.678 "trtype": "TCP",
00:20:43.678 "adrfam": "IPv4",
00:20:43.678 "traddr": "10.0.0.2",
00:20:43.678 "trsvcid": "4420"
00:20:43.678 },
00:20:43.678 "peer_address": {
00:20:43.678 "trtype": "TCP",
00:20:43.678 "adrfam": "IPv4",
00:20:43.678 "traddr": "10.0.0.1",
00:20:43.678 "trsvcid": "36816"
00:20:43.678 },
00:20:43.678 "auth": {
00:20:43.678 "state": "completed",
00:20:43.678 "digest": "sha512",
00:20:43.678 "dhgroup": "ffdhe8192"
00:20:43.678 }
00:20:43.678 }
00:20:43.678 ]'
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:43.678 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:43.938 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:43.938 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:43.938 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:43.938 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:43.938 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:43.938 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==:
00:20:43.938 08:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: --dhchap-ctrl-secret DHHC-1:02:NTEyMmRhMmYwNDkyZmNlMzIxZjk5OWM5ZmNjODZlM2E0ZThmM2E0NjNkMmYxMjhkUoDicg==:
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:44.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:44.889 08:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:45.458
00:20:45.458 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:45.458 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:45.458 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:45.719 {
00:20:45.719 "cntlid": 141,
00:20:45.719 "qid": 0,
00:20:45.719 "state": "enabled",
00:20:45.719 "thread": "nvmf_tgt_poll_group_000",
00:20:45.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:45.719 "listen_address": {
00:20:45.719 "trtype": "TCP",
00:20:45.719 "adrfam": "IPv4",
00:20:45.719 "traddr": "10.0.0.2",
00:20:45.719 "trsvcid": "4420"
00:20:45.719 },
00:20:45.719 "peer_address": {
00:20:45.719 "trtype": "TCP",
00:20:45.719 "adrfam": "IPv4",
00:20:45.719 "traddr": "10.0.0.1",
00:20:45.719 "trsvcid": "36830"
00:20:45.719 },
00:20:45.719 "auth": {
00:20:45.719 "state": "completed",
00:20:45.719 "digest": "sha512",
00:20:45.719 "dhgroup": "ffdhe8192"
00:20:45.719 }
00:20:45.719 }
00:20:45.719 ]'
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:45.719 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:45.980 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M:
00:20:45.980 08:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:01:YjU3NTQ5OTQxNzU1ZWY4OTIxN2RlNWVmOTIyNzc3ZDnMz2+M:
00:20:46.550 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:46.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:46.550 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:46.550 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:46.550 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.550 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:46.550 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:46.550 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:46.550 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:46.810 08:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:47.381
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:47.381 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:47.381 {
00:20:47.381 "cntlid": 143,
00:20:47.381 "qid": 0,
00:20:47.381 "state": "enabled",
00:20:47.381 "thread": "nvmf_tgt_poll_group_000",
00:20:47.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:47.381 "listen_address": {
00:20:47.381 "trtype": "TCP",
00:20:47.381 "adrfam": "IPv4",
00:20:47.381 "traddr": "10.0.0.2",
00:20:47.381 "trsvcid": "4420"
00:20:47.381 },
00:20:47.381 "peer_address": {
00:20:47.381 "trtype": "TCP",
00:20:47.381 "adrfam": "IPv4",
00:20:47.381 "traddr": "10.0.0.1",
00:20:47.381 "trsvcid": "36844"
00:20:47.381 },
00:20:47.381 "auth": {
00:20:47.381 "state": "completed",
00:20:47.381 "digest": "sha512",
00:20:47.381 "dhgroup": "ffdhe8192"
00:20:47.381 }
00:20:47.381 }
00:20:47.381 ]'
00:20:47.642 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:47.642 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:47.642 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:47.642 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:47.642 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:47.642 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:47.642 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:47.642 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:47.902 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=:
00:20:47.902 08:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=:
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:48.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:48.474 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:48.734 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:49.306
00:20:49.306 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:49.306 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:49.306 08:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:49.306 {
00:20:49.306 "cntlid": 145,
00:20:49.306 "qid": 0,
00:20:49.306 "state": "enabled",
00:20:49.306 "thread": "nvmf_tgt_poll_group_000",
00:20:49.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:49.306 "listen_address": {
00:20:49.306 "trtype": "TCP",
00:20:49.306 "adrfam": "IPv4",
00:20:49.306 "traddr": "10.0.0.2",
00:20:49.306 "trsvcid": "4420"
00:20:49.306 },
00:20:49.306 "peer_address": {
00:20:49.306 "trtype": "TCP",
"trtype": "TCP", 00:20:49.306 "adrfam": "IPv4", 00:20:49.306 "traddr": "10.0.0.1", 00:20:49.306 "trsvcid": "36874" 00:20:49.306 }, 00:20:49.306 "auth": { 00:20:49.306 "state": "completed", 00:20:49.306 "digest": "sha512", 00:20:49.306 "dhgroup": "ffdhe8192" 00:20:49.306 } 00:20:49.306 } 00:20:49.306 ]' 00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.306 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.567 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.567 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.567 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.567 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.567 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.567 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:49.567 08:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTE1MTkyMTk3ZWNhZWZiNzYyMjhiYzNkN2U2ZTcyYTY0YzAzZmNkODIxMzI1MDU3H3XE0Q==: --dhchap-ctrl-secret DHHC-1:03:NTRkNDA0NGZkNmU5YzIxYTVhYWVjOTM2MWQ3NTI1Y2ViNTI1OTYyZThmNDExNWM4NjIwNjk1Y2U0ZWI5MjU0M3VUq34=: 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:50.508 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:50.769 request: 00:20:50.769 { 00:20:50.769 "name": "nvme0", 00:20:50.769 "trtype": "tcp", 00:20:50.769 "traddr": "10.0.0.2", 00:20:50.769 "adrfam": "ipv4", 00:20:50.769 "trsvcid": "4420", 00:20:50.769 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:50.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:50.769 "prchk_reftag": false, 00:20:50.769 "prchk_guard": false, 00:20:50.769 "hdgst": false, 00:20:50.769 "ddgst": false, 00:20:50.769 "dhchap_key": "key2", 00:20:50.769 "allow_unrecognized_csi": false, 00:20:50.769 "method": "bdev_nvme_attach_controller", 00:20:50.769 "req_id": 1 00:20:50.769 } 00:20:50.769 Got JSON-RPC error response 00:20:50.769 response: 00:20:50.769 { 00:20:50.769 "code": -5, 00:20:50.769 "message": "Input/output error" 00:20:50.769 } 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.769 08:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.769 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.030 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.030 08:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.291 request: 00:20:51.291 { 00:20:51.291 "name": "nvme0", 00:20:51.291 "trtype": "tcp", 00:20:51.291 "traddr": "10.0.0.2", 00:20:51.291 "adrfam": "ipv4", 00:20:51.291 "trsvcid": "4420", 00:20:51.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:51.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:51.291 "prchk_reftag": false, 00:20:51.291 "prchk_guard": false, 00:20:51.291 "hdgst": false, 00:20:51.291 "ddgst": false, 00:20:51.291 "dhchap_key": "key1", 00:20:51.291 "dhchap_ctrlr_key": "ckey2", 00:20:51.291 "allow_unrecognized_csi": false, 00:20:51.291 "method": "bdev_nvme_attach_controller", 00:20:51.291 "req_id": 1 00:20:51.291 } 00:20:51.291 Got JSON-RPC error response 00:20:51.291 response: 00:20:51.291 { 00:20:51.291 "code": -5, 00:20:51.291 "message": "Input/output error" 00:20:51.291 } 00:20:51.291 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:51.292 08:35:43 
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.292 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.863 request:
00:20:51.863 {
00:20:51.863 "name": "nvme0",
00:20:51.863 "trtype": "tcp",
00:20:51.863 "traddr": "10.0.0.2",
00:20:51.863 "adrfam": "ipv4",
00:20:51.863 "trsvcid": "4420",
00:20:51.863 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:51.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:51.863 "prchk_reftag": false,
00:20:51.863 "prchk_guard": false,
00:20:51.863 "hdgst": false,
00:20:51.863 "ddgst": false,
00:20:51.863 "dhchap_key": "key1",
00:20:51.863 "dhchap_ctrlr_key": "ckey1",
00:20:51.863 "allow_unrecognized_csi": false,
00:20:51.863 "method": "bdev_nvme_attach_controller",
00:20:51.863 "req_id": 1
00:20:51.863 }
00:20:51.863 Got JSON-RPC error response
00:20:51.863 response:
00:20:51.863 {
00:20:51.863 "code": -5,
00:20:51.863 "message": "Input/output error"
00:20:51.863 }
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3729643
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3729643 ']'
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3729643
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3729643
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3729643'
00:20:51.864 killing process with pid 3729643
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3729643
00:20:51.864 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3729643
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3756403
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3756403
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3756403 ']'
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:52.125 08:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3756403
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3756403 ']'
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.068 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.328 null0
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Y9c
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.y8c ]]
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.y8c
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.328 08:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.s5g
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.tJI ]]
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tJI
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nAr
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.XLU ]]
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLU
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.328 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FQI
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:53.329 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:54.271 nvme0n1
00:20:54.271 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:54.271 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:54.271 08:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:54.532 {
00:20:54.532 "cntlid": 1,
00:20:54.532 "qid": 0,
00:20:54.532 "state": "enabled",
00:20:54.532 "thread": "nvmf_tgt_poll_group_000",
00:20:54.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:54.532 "listen_address": {
00:20:54.532 "trtype": "TCP",
00:20:54.532 "adrfam": "IPv4",
00:20:54.532 "traddr": "10.0.0.2",
00:20:54.532 "trsvcid": "4420"
00:20:54.532 },
00:20:54.532 "peer_address": {
00:20:54.532 "trtype": "TCP",
00:20:54.532 "adrfam": "IPv4",
00:20:54.532 "traddr": "10.0.0.1",
00:20:54.532 "trsvcid": "38430"
00:20:54.532 },
00:20:54.532 "auth": {
00:20:54.532 "state": "completed",
00:20:54.532 "digest": "sha512",
00:20:54.532 "dhgroup": "ffdhe8192"
00:20:54.532 }
00:20:54.532 }
00:20:54.532 ]'
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:54.532 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:54.792 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=:
00:20:54.792 08:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=:
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:55.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:55.734 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:55.995 request:
00:20:55.995 {
00:20:55.995 "name": "nvme0",
00:20:55.995 "trtype": "tcp",
00:20:55.995 "traddr": "10.0.0.2",
00:20:55.995 "adrfam": "ipv4",
00:20:55.995 "trsvcid": "4420",
00:20:55.995 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:55.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:55.995 "prchk_reftag": false,
00:20:55.995 "prchk_guard": false,
00:20:55.995 "hdgst": false,
00:20:55.995 "ddgst": false,
00:20:55.995 "dhchap_key": "key3",
00:20:55.995 "allow_unrecognized_csi": false,
00:20:55.995 "method": "bdev_nvme_attach_controller",
00:20:55.995 "req_id": 1
00:20:55.995 }
00:20:55.995 Got JSON-RPC error response
00:20:55.995 response:
00:20:55.995 {
00:20:55.995 "code": -5,
00:20:55.995 "message": "Input/output error"
00:20:55.995 }
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:55.995 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:56.256 request:
00:20:56.256 {
00:20:56.256 "name": "nvme0",
00:20:56.256 "trtype": "tcp",
00:20:56.256 "traddr": "10.0.0.2",
00:20:56.256 "adrfam": "ipv4",
00:20:56.256 "trsvcid": "4420",
00:20:56.256 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:56.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:56.256 "prchk_reftag": false,
00:20:56.256 "prchk_guard": false,
00:20:56.256 "hdgst": false,
00:20:56.256 "ddgst": false,
00:20:56.256 "dhchap_key": "key3",
00:20:56.256 "allow_unrecognized_csi": false,
00:20:56.256 "method": "bdev_nvme_attach_controller",
00:20:56.256 "req_id": 1
00:20:56.256 }
00:20:56.256 Got JSON-RPC error response
00:20:56.256 response:
00:20:56.256 {
00:20:56.256 "code": -5,
00:20:56.256 "message": "Input/output error"
00:20:56.256 }
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:56.256 08:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:56.516 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:56.777 request: 00:20:56.777 { 00:20:56.777 "name": "nvme0", 00:20:56.777 "trtype": "tcp", 00:20:56.777 "traddr": "10.0.0.2", 00:20:56.777 "adrfam": "ipv4", 00:20:56.777 "trsvcid": "4420", 00:20:56.777 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:56.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:56.777 "prchk_reftag": false, 00:20:56.777 "prchk_guard": false, 00:20:56.777 "hdgst": false, 00:20:56.777 "ddgst": false, 00:20:56.777 "dhchap_key": "key0", 00:20:56.777 "dhchap_ctrlr_key": "key1", 00:20:56.777 "allow_unrecognized_csi": false, 00:20:56.777 "method": "bdev_nvme_attach_controller", 00:20:56.777 "req_id": 1 00:20:56.777 } 00:20:56.777 Got JSON-RPC error response 00:20:56.777 response: 00:20:56.777 { 00:20:56.777 "code": -5, 00:20:56.777 "message": "Input/output error" 00:20:56.777 } 00:20:56.777 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:56.777 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:56.777 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:56.777 08:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:56.777 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:56.777 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:56.777 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:57.037 nvme0n1 00:20:57.037 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:57.037 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:57.037 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.297 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.297 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.297 08:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.297 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:57.297 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.297 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.297 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.297 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:57.297 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:57.297 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:58.239 nvme0n1 00:20:58.239 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:58.239 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:58.239 08:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:58.500 08:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: --dhchap-ctrl-secret DHHC-1:03:NGFlYjc1NWQ0MTBjM2ZkODFkNTZkMTliZmE2ZDY4NDJjNWZmMTc4MmI0MWQ2MjliOTlhYzA4MThlODk3Y2Q2Ycvd40Y=: 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:59.442 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:00.013 request: 00:21:00.013 { 00:21:00.013 "name": "nvme0", 00:21:00.013 "trtype": "tcp", 00:21:00.013 "traddr": "10.0.0.2", 00:21:00.013 "adrfam": "ipv4", 00:21:00.013 "trsvcid": "4420", 00:21:00.013 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:00.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:00.013 "prchk_reftag": false, 00:21:00.013 "prchk_guard": false, 00:21:00.013 "hdgst": false, 00:21:00.013 "ddgst": false, 00:21:00.013 "dhchap_key": "key1", 00:21:00.013 "allow_unrecognized_csi": false, 00:21:00.013 "method": "bdev_nvme_attach_controller", 00:21:00.013 "req_id": 1 00:21:00.013 } 00:21:00.013 Got JSON-RPC error response 00:21:00.013 response: 00:21:00.013 { 00:21:00.013 "code": -5, 00:21:00.013 "message": "Input/output error" 00:21:00.013 } 00:21:00.013 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:00.014 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:00.014 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:00.014 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:00.014 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:00.014 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:00.014 08:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:00.955 nvme0n1 00:21:00.955 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:00.955 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:00.955 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.955 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.955 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.955 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.216 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.216 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.216 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.216 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.216 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:01.216 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:01.216 08:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:01.477 nvme0n1 00:21:01.477 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:01.477 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:01.477 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: '' 2s 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: ]] 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NmE0MzYyZGFkOTgxMWNhYzQ3MDU2ZDYyY2NkNDI5MjOJSZcM: 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:01.737 08:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: 2s 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: ]] 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTFkYjA0Y2M4YTM5MmE0NDQ5N2JhZmZkNjdjYjMyMjc0ODA1MzZiZmVmZDhlY2Q0siICcA==: 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:04.293 08:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:06.206 08:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:06.778 nvme0n1 00:21:06.779 08:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:06.779 08:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.779 08:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.779 08:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.779 08:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:06.779 08:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:07.350 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:07.350 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:07.350 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:07.611 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:07.872 08:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:08.445 request: 00:21:08.445 { 00:21:08.445 "name": "nvme0", 00:21:08.445 "dhchap_key": "key1", 00:21:08.445 "dhchap_ctrlr_key": "key3", 00:21:08.445 "method": "bdev_nvme_set_keys", 00:21:08.445 "req_id": 1 00:21:08.445 } 00:21:08.445 Got JSON-RPC error response 00:21:08.445 response: 00:21:08.445 { 00:21:08.445 "code": -13, 00:21:08.445 "message": "Permission denied" 00:21:08.445 } 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:08.445 08:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:09.830 08:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:10.875 nvme0n1 00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
00:21:10.875 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.876 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:10.876 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.876 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:10.876 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:11.158 request: 00:21:11.158 { 00:21:11.158 "name": "nvme0", 00:21:11.158 "dhchap_key": "key2", 00:21:11.158 "dhchap_ctrlr_key": "key0", 00:21:11.158 "method": "bdev_nvme_set_keys", 00:21:11.158 "req_id": 1 00:21:11.158 } 00:21:11.158 Got JSON-RPC error response 00:21:11.158 response: 00:21:11.158 { 00:21:11.158 "code": -13, 00:21:11.158 "message": "Permission denied" 00:21:11.158 } 00:21:11.158 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:11.158 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:11.158 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:11.158 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:11.158 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:11.158 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:11.158 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.419 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:11.419 08:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:12.361 08:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:12.361 08:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:12.361 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.361 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:12.361 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:12.361 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:12.361 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3729681 00:21:12.361 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3729681 ']' 00:21:12.361 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3729681 00:21:12.361 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:12.361 
08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3729681 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3729681' 00:21:12.622 killing process with pid 3729681 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3729681 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3729681 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:12.622 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.882 rmmod nvme_tcp 00:21:12.882 rmmod nvme_fabrics 00:21:12.882 rmmod nvme_keyring 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 3756403 ']' 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 3756403 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3756403 ']' 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3756403 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3756403 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3756403' 00:21:12.882 killing process with pid 3756403 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3756403 00:21:12.882 08:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3756403 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:12.882 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.143 08:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Y9c /tmp/spdk.key-sha256.s5g /tmp/spdk.key-sha384.nAr /tmp/spdk.key-sha512.FQI /tmp/spdk.key-sha512.y8c /tmp/spdk.key-sha384.tJI /tmp/spdk.key-sha256.XLU '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:15.055 00:21:15.055 real 2m43.225s 00:21:15.055 user 6m4.236s 00:21:15.055 sys 0m23.727s 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.055 ************************************ 00:21:15.055 END TEST nvmf_auth_target 00:21:15.055 ************************************ 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:15.055 08:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:15.055 ************************************ 00:21:15.055 START TEST nvmf_bdevio_no_huge 00:21:15.055 ************************************ 00:21:15.317 08:36:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:15.317 * Looking for test storage... 
00:21:15.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.317 08:36:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:15.317 08:36:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:21:15.317 08:36:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:15.317 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:15.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.318 --rc genhtml_branch_coverage=1 00:21:15.318 --rc genhtml_function_coverage=1 00:21:15.318 --rc genhtml_legend=1 00:21:15.318 --rc geninfo_all_blocks=1 00:21:15.318 --rc geninfo_unexecuted_blocks=1 00:21:15.318 00:21:15.318 ' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:15.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.318 --rc genhtml_branch_coverage=1 00:21:15.318 --rc genhtml_function_coverage=1 00:21:15.318 --rc genhtml_legend=1 00:21:15.318 --rc geninfo_all_blocks=1 00:21:15.318 --rc geninfo_unexecuted_blocks=1 00:21:15.318 00:21:15.318 ' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:15.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.318 --rc genhtml_branch_coverage=1 00:21:15.318 --rc genhtml_function_coverage=1 00:21:15.318 --rc genhtml_legend=1 00:21:15.318 --rc geninfo_all_blocks=1 00:21:15.318 --rc geninfo_unexecuted_blocks=1 00:21:15.318 00:21:15.318 ' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:15.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.318 --rc genhtml_branch_coverage=1 00:21:15.318 --rc genhtml_function_coverage=1 00:21:15.318 --rc genhtml_legend=1 00:21:15.318 --rc geninfo_all_blocks=1 00:21:15.318 --rc geninfo_unexecuted_blocks=1 00:21:15.318 00:21:15.318 ' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:15.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:15.318 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:15.319 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:15.319 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.319 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.319 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.319 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:15.319 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:15.319 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.319 08:36:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.461 
08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:23.461 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:23.462 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 
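The xtrace above is gather_supported_nvmf_pci_devs from nvmf/common.sh: it buckets NICs by PCI vendor:device ID into the e810/x722/mlx arrays, then narrows pci_devs to the e810 family because this job runs with SPDK_TEST_NVMF_NICS=e810, so only the 0x8086:0x159b ports survive the loop. A minimal out-of-tree sketch of that bucketing, assuming pci_bus_cache can be approximated from lspci output (the in-tree map is built elsewhere in common.sh and is not shown in this trace):

#!/usr/bin/env bash
# Sketch: rebuild a pci_bus_cache-style map keyed "0xVENDOR:0xDEVICE" -> BDF list.
# lspci -Dnmm prints lines like: 0000:4b:00.0 "0200" "8086" "159b" ... (quoted numeric fields).
declare -A pci_bus_cache
while read -r bdf vendor device; do
    pci_bus_cache["0x$vendor:0x$device"]+="$bdf "
done < <(lspci -Dnmm | awk -F'"' '{print $1, $4, $6}')

intel=0x8086
# e810 family = device IDs 0x1592 and 0x159b, exactly as in the trace above.
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})

for pci in "${e810[@]}"; do
    echo "Found $pci"    # mirrors the 'Found 0000:4b:00.x (0x8086 - 0x159b)' lines
done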
00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:23.462 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:23.462 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:23.462 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:23.462 
08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:21:23.462 00:21:23.462 --- 10.0.0.2 ping statistics --- 00:21:23.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.462 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:21:23.462 00:21:23.462 --- 10.0.0.1 ping statistics --- 00:21:23.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.462 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=3765335 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 3765335 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3765335 ']' 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.462 08:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.462 [2024-10-01 08:36:14.449942] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:21:23.462 [2024-10-01 08:36:14.450013] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:23.462 [2024-10-01 08:36:14.537672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.462 [2024-10-01 08:36:14.633951] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.462 [2024-10-01 08:36:14.633988] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.462 [2024-10-01 08:36:14.634002] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.462 [2024-10-01 08:36:14.634009] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.462 [2024-10-01 08:36:14.634016] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.462 [2024-10-01 08:36:14.635344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:23.462 [2024-10-01 08:36:14.635495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:21:23.463 [2024-10-01 08:36:14.635645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.463 [2024-10-01 08:36:14.635646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:21:23.463 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.463 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:23.463 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:23.463 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.463 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.724 [2024-10-01 08:36:15.303064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.724 Malloc0 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.724 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.725 [2024-10-01 08:36:15.356802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:23.725 { 00:21:23.725 "params": { 00:21:23.725 "name": "Nvme$subsystem", 00:21:23.725 "trtype": "$TEST_TRANSPORT", 00:21:23.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.725 "adrfam": "ipv4", 00:21:23.725 "trsvcid": "$NVMF_PORT", 00:21:23.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.725 "hdgst": ${hdgst:-false}, 00:21:23.725 "ddgst": ${ddgst:-false} 00:21:23.725 }, 00:21:23.725 "method": "bdev_nvme_attach_controller" 00:21:23.725 } 00:21:23.725 EOF 00:21:23.725 )") 00:21:23.725 08:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:21:23.725 08:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:21:23.725 "params": { 00:21:23.725 "name": "Nvme1", 00:21:23.725 "trtype": "tcp", 00:21:23.725 "traddr": "10.0.0.2", 00:21:23.725 "adrfam": "ipv4", 00:21:23.725 "trsvcid": "4420", 00:21:23.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.725 "hdgst": false, 00:21:23.725 "ddgst": false 00:21:23.725 }, 00:21:23.725 "method": "bdev_nvme_attach_controller" 00:21:23.725 }' 00:21:23.725 [2024-10-01 08:36:15.421915] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:21:23.725 [2024-10-01 08:36:15.422011] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3765499 ] 00:21:23.725 [2024-10-01 08:36:15.493316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.986 [2024-10-01 08:36:15.591412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.986 [2024-10-01 08:36:15.591529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.986 [2024-10-01 08:36:15.591532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.248 I/O targets: 00:21:24.248 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:24.248 00:21:24.248 00:21:24.248 CUnit - A unit testing framework for C - Version 2.1-3 00:21:24.248 http://cunit.sourceforge.net/ 00:21:24.248 00:21:24.248 00:21:24.248 Suite: bdevio tests on: Nvme1n1 00:21:24.248 Test: blockdev write read block ...passed 00:21:24.248 Test: blockdev write zeroes read block ...passed 00:21:24.248 Test: blockdev write zeroes read no split ...passed 00:21:24.509 Test: blockdev write zeroes read split ...passed 00:21:24.509 Test: blockdev write zeroes read split partial ...passed 00:21:24.509 Test: blockdev reset ...[2024-10-01 08:36:16.094860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.509 [2024-10-01 08:36:16.094930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aca40 (9): Bad file descriptor 00:21:24.509 [2024-10-01 08:36:16.106247] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
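The bdevio run above gets its bdev stack from --json /dev/fd/62, i.e. bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller parameters shown in the trace, and bdevio reads them as a pseudo-file without anything touching disk. A hedged sketch of that plumbing — the attach-controller params are copied from the trace, while the outer subsystems/config wrapper is the standard SPDK --json shape and is assumed here, since the trace only prints the inner block:

# Sketch: feed a generated SPDK JSON config to bdevio through <(...).
gen_json() {
    cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_nvme_attach_controller",
   "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
             "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
             "hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}
]}]}
EOF
}
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_json) --no-huge -s 1024    # flags as in the trace; <() shows up as /dev/fd/NN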
00:21:24.509 passed 00:21:24.509 Test: blockdev write read 8 blocks ...passed 00:21:24.509 Test: blockdev write read size > 128k ...passed 00:21:24.509 Test: blockdev write read invalid size ...passed 00:21:24.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:24.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:24.509 Test: blockdev write read max offset ...passed 00:21:24.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:24.509 Test: blockdev writev readv 8 blocks ...passed 00:21:24.509 Test: blockdev writev readv 30 x 1block ...passed 00:21:24.509 Test: blockdev writev readv block ...passed 00:21:24.509 Test: blockdev writev readv size > 128k ...passed 00:21:24.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:24.509 Test: blockdev comparev and writev ...[2024-10-01 08:36:16.285456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.509 [2024-10-01 08:36:16.285482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.509 [2024-10-01 08:36:16.285493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.509 [2024-10-01 08:36:16.285499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:24.509 [2024-10-01 08:36:16.285819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.509 [2024-10-01 08:36:16.285829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:24.509 [2024-10-01 08:36:16.285839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.509 [2024-10-01 08:36:16.285845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:24.509 [2024-10-01 08:36:16.286181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.509 [2024-10-01 08:36:16.286191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:24.509 [2024-10-01 08:36:16.286202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.509 [2024-10-01 08:36:16.286208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:24.509 [2024-10-01 08:36:16.286526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.509 [2024-10-01 08:36:16.286536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:24.509 [2024-10-01 08:36:16.286546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.509 [2024-10-01 08:36:16.286552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:24.509 passed 00:21:24.770 Test: blockdev nvme passthru rw ...passed 00:21:24.770 Test: blockdev nvme passthru vendor specific ...[2024-10-01 08:36:16.369435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.770 [2024-10-01 08:36:16.369448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:24.770 [2024-10-01 08:36:16.369637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.770 [2024-10-01 08:36:16.369646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:24.770 [2024-10-01 08:36:16.369850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.770 [2024-10-01 08:36:16.369863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:24.770 [2024-10-01 08:36:16.370069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.770 [2024-10-01 08:36:16.370079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:24.770 passed 00:21:24.770 Test: blockdev nvme admin passthru ...passed 00:21:24.770 Test: blockdev copy ...passed 00:21:24.770 00:21:24.770 Run Summary: Type Total Ran Passed Failed Inactive 00:21:24.770 suites 1 1 n/a 0 0 00:21:24.770 tests 23 23 23 0 0 00:21:24.770 asserts 152 152 152 0 n/a 00:21:24.770 00:21:24.770 Elapsed time = 1.112 seconds 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.032 rmmod nvme_tcp 00:21:25.032 rmmod nvme_fabrics 00:21:25.032 rmmod nvme_keyring 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 3765335 ']' 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 3765335 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3765335 ']' 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3765335 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3765335 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3765335' 00:21:25.032 killing process with pid 3765335 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3765335 00:21:25.032 08:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3765335 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.605 08:36:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.522 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.522 00:21:27.522 real 0m12.394s 00:21:27.522 user 0m14.208s 00:21:27.522 sys 0m6.630s 00:21:27.522 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:27.522 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.522 ************************************ 00:21:27.522 END TEST nvmf_bdevio_no_huge 00:21:27.522 ************************************ 00:21:27.522 08:36:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:27.522 08:36:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:27.522 08:36:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:27.522 08:36:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:27.785 ************************************ 00:21:27.785 START TEST nvmf_tls 00:21:27.785 ************************************ 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:27.785 * Looking for test storage... 00:21:27.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.785 --rc genhtml_branch_coverage=1 00:21:27.785 --rc genhtml_function_coverage=1 00:21:27.785 --rc genhtml_legend=1 00:21:27.785 --rc geninfo_all_blocks=1 00:21:27.785 --rc geninfo_unexecuted_blocks=1 00:21:27.785 00:21:27.785 ' 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.785 --rc genhtml_branch_coverage=1 00:21:27.785 --rc genhtml_function_coverage=1 00:21:27.785 --rc genhtml_legend=1 00:21:27.785 --rc geninfo_all_blocks=1 00:21:27.785 --rc geninfo_unexecuted_blocks=1 00:21:27.785 00:21:27.785 ' 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.785 --rc genhtml_branch_coverage=1 00:21:27.785 --rc genhtml_function_coverage=1 00:21:27.785 --rc genhtml_legend=1 00:21:27.785 --rc geninfo_all_blocks=1 00:21:27.785 --rc geninfo_unexecuted_blocks=1 00:21:27.785 00:21:27.785 ' 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.785 --rc genhtml_branch_coverage=1 00:21:27.785 --rc genhtml_function_coverage=1 00:21:27.785 --rc genhtml_legend=1 00:21:27.785 --rc geninfo_all_blocks=1 00:21:27.785 --rc geninfo_unexecuted_blocks=1 00:21:27.785 00:21:27.785 ' 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
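This lt/cmp_versions trace (scripts/common.sh) repeats the lcov version gate seen at the top of the bdevio log: both version strings are split on '.', '-' and ':' and compared component-wise, and because lcov 1.x sorts before 2 the extra --rc branch/function coverage options get enabled. A self-contained sketch of that comparison, mirroring the traced logic rather than quoting the in-tree code verbatim:

#!/usr/bin/env bash
# lt A B -> exit 0 iff version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A already bigger
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A strictly smaller
    done
    return 1    # equal -> not less-than
}

lt 1.15 2 && echo 'lcov < 2: enabling --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'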
00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.785 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.786 08:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:35.925 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:35.925 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:35.925 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:35.926 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:35.926 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
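The device scan above reduces to: build per-family PCI ID allowlists (E810: 0x1592/0x159b, X722: 0x37d2, plus the Mellanox IDs), intersect them with the host's PCI bus, and resolve every match to its kernel netdev through sysfs, which is how the two cvl_0_* interfaces were found. A condensed standalone sketch of that logic, written fresh for illustration rather than lifted from gather_supported_nvmf_pci_devs:

intel=0x8086
e810_ids=(0x1592 0x159b)                  # the E810 device IDs allowlisted above
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == "$intel" ]] || continue
    [[ " ${e810_ids[*]} " == *" $device "* ]] || continue
    for net in "$dev"/net/*; do           # sysfs exposes each port's netdev here
        [[ -e $net ]] && echo "Found ${dev##*/} ($vendor - $device): ${net##*/}"
    done
done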
00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:21:35.926 00:21:35.926 --- 10.0.0.2 ping statistics --- 00:21:35.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.926 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:21:35.926 00:21:35.926 --- 10.0.0.1 ping statistics --- 00:21:35.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.926 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3770144 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3770144 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3770144 ']' 00:21:35.926 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.927 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.927 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.927 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.927 08:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.927 [2024-10-01 08:36:26.931925] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
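Everything from ip netns add through nvmfappstart boils down to a small back-to-back topology: the first E810 port becomes the target inside its own network namespace, the second stays in the root namespace as the initiator, port 4420 is opened in the firewall, reachability is proven in both directions, and the target binary is launched inside the namespace (the SPDK startup banner around this point is that launch). Condensed:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target and back
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

Because the two physical ports are linked, every RPC-driven packet in this NET_TYPE=phy run crosses the real NIC rather than a veth pair.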
00:21:35.927 [2024-10-01 08:36:26.932005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.927 [2024-10-01 08:36:27.021377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.927 [2024-10-01 08:36:27.113185] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.927 [2024-10-01 08:36:27.113241] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.927 [2024-10-01 08:36:27.113250] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.927 [2024-10-01 08:36:27.113258] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.927 [2024-10-01 08:36:27.113264] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.927 [2024-10-01 08:36:27.114036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.927 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.927 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:35.927 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:35.927 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.927 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.188 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.188 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:36.188 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:36.188 true 00:21:36.188 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:36.188 08:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:36.449 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:36.449 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:36.449 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:36.710 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:36.710 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:36.710 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:36.710 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:36.710 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:36.971 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:36.971 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:37.232 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:37.232 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:37.232 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.232 08:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:37.494 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:37.494 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:37.494 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:37.494 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:37.494 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:37.754 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:37.754 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:37.754 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:21:38.015 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:21:38.276 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:38.276 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:38.276 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:38.276 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@726 -- # local prefix key digest 00:21:38.276 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:21:38.276 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:21:38.276 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:21:38.276 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.PaaVwz6B3z 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.z12oH2jiJZ 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PaaVwz6B3z 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.z12oH2jiJZ 00:21:38.277 08:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:38.536 08:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:38.536 08:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.PaaVwz6B3z 00:21:38.536 08:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PaaVwz6B3z 00:21:38.536 08:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:38.797 [2024-10-01 08:36:30.503460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.797 08:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.057 08:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:39.057 [2024-10-01 08:36:30.824227] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.057 [2024-10-01 08:36:30.824449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.057 08:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:39.318 malloc0 00:21:39.318 08:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:39.578 08:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PaaVwz6B3z 00:21:39.578 08:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:39.839 08:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.PaaVwz6B3z 00:21:49.837 Initializing NVMe Controllers 00:21:49.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:49.837 Initialization complete. Launching workers. 00:21:49.837 ======================================================== 00:21:49.837 Latency(us) 00:21:49.837 Device Information : IOPS MiB/s Average min max 00:21:49.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18763.05 73.29 3410.98 1120.82 4197.97 00:21:49.838 ======================================================== 00:21:49.838 Total : 18763.05 73.29 3410.98 1120.82 4197.97 00:21:49.838 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PaaVwz6B3z 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PaaVwz6B3z 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3772900 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3772900 /var/tmp/bdevperf.sock 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3772900 ']' 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:49.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.838 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.838 [2024-10-01 08:36:41.654215] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:21:49.838 [2024-10-01 08:36:41.654277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772900 ] 00:21:50.097 [2024-10-01 08:36:41.703825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.097 [2024-10-01 08:36:41.755666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.097 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.097 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:50.097 08:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PaaVwz6B3z 00:21:50.357 08:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:50.357 [2024-10-01 08:36:42.140754] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.616 TLSTESTn1 00:21:50.616 08:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:50.617 Running I/O for 10 seconds... 
00:22:00.610 5955.00 IOPS, 23.26 MiB/s 6067.00 IOPS, 23.70 MiB/s 6024.67 IOPS, 23.53 MiB/s 6104.00 IOPS, 23.84 MiB/s 6120.20 IOPS, 23.91 MiB/s 6137.00 IOPS, 23.97 MiB/s 6128.43 IOPS, 23.94 MiB/s 6127.25 IOPS, 23.93 MiB/s 6165.67 IOPS, 24.08 MiB/s 6180.50 IOPS, 24.14 MiB/s 00:22:00.610 Latency(us) 00:22:00.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.610 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:00.610 Verification LBA range: start 0x0 length 0x2000 00:22:00.610 TLSTESTn1 : 10.01 6186.12 24.16 0.00 0.00 20662.01 4724.05 29709.65 00:22:00.610 =================================================================================================================== 00:22:00.610 Total : 6186.12 24.16 0.00 0.00 20662.01 4724.05 29709.65 00:22:00.610 { 00:22:00.610 "results": [ 00:22:00.611 { 00:22:00.611 "job": "TLSTESTn1", 00:22:00.611 "core_mask": "0x4", 00:22:00.611 "workload": "verify", 00:22:00.611 "status": "finished", 00:22:00.611 "verify_range": { 00:22:00.611 "start": 0, 00:22:00.611 "length": 8192 00:22:00.611 }, 00:22:00.611 "queue_depth": 128, 00:22:00.611 "io_size": 4096, 00:22:00.611 "runtime": 10.011288, 00:22:00.611 "iops": 6186.117111005097, 00:22:00.611 "mibps": 24.16451996486366, 00:22:00.611 "io_failed": 0, 00:22:00.611 "io_timeout": 0, 00:22:00.611 "avg_latency_us": 20662.009244266468, 00:22:00.611 "min_latency_us": 4724.053333333333, 00:22:00.611 "max_latency_us": 29709.653333333332 00:22:00.611 } 00:22:00.611 ], 00:22:00.611 "core_count": 1 00:22:00.611 } 00:22:00.611 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.611 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3772900 00:22:00.611 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3772900 ']' 00:22:00.611 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3772900 00:22:00.611 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:00.611 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.611 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3772900 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3772900' 00:22:00.871 killing process with pid 3772900 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3772900 00:22:00.871 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.871 00:22:00.871 Latency(us) 00:22:00.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.871 =================================================================================================================== 00:22:00.871 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3772900 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 /tmp/tmp.z12oH2jiJZ 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z12oH2jiJZ 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:00.871 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z12oH2jiJZ 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z12oH2jiJZ 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3775048 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3775048 /var/tmp/bdevperf.sock 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3775048 ']' 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.872 08:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.872 [2024-10-01 08:36:52.632315] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:22:00.872 [2024-10-01 08:36:52.632375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775048 ] 00:22:00.872 [2024-10-01 08:36:52.683019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.132 [2024-10-01 08:36:52.734776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.702 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.702 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:01.702 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z12oH2jiJZ 00:22:01.962 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:01.962 [2024-10-01 08:36:53.713187] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.962 [2024-10-01 08:36:53.720562] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:01.962 [2024-10-01 08:36:53.721456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210ae10 (107): Transport endpoint is not connected 00:22:01.962 [2024-10-01 08:36:53.722452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210ae10 (9): Bad file descriptor 00:22:01.962 [2024-10-01 08:36:53.723453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-10-01 08:36:53.723461] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:01.962 [2024-10-01 08:36:53.723467] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:01.962 [2024-10-01 08:36:53.723474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
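This first negative case attached with /tmp/tmp.z12oH2jiJZ, the second interchange key, while the target only has the first key registered for host1; the TLS handshake therefore dies on the raw socket (errno 107) and the controller lands in the failed state above, the client-visible result being the JSON-RPC -5 dump that follows. For reference, both NVMeTLSkey-1 strings came out of format_interchange_psk earlier; a minimal sketch of that layout, assuming it matches SPDK's helper (ASCII key material with a little-endian CRC-32 appended, base64-encoded between the 'NVMeTLSkey-1:01:' prefix and a trailing colon, '01' being the SHA-256 hash indicator):

key_str=00112233445566778899aabbccddeeff    # consumed as ASCII bytes, not decoded hex
python3 - "$key_str" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))    # little-endian CRC-32 of the key (assumed)
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
PY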
00:22:01.962 request: 00:22:01.962 { 00:22:01.962 "name": "TLSTEST", 00:22:01.962 "trtype": "tcp", 00:22:01.962 "traddr": "10.0.0.2", 00:22:01.962 "adrfam": "ipv4", 00:22:01.962 "trsvcid": "4420", 00:22:01.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.962 "prchk_reftag": false, 00:22:01.962 "prchk_guard": false, 00:22:01.962 "hdgst": false, 00:22:01.962 "ddgst": false, 00:22:01.962 "psk": "key0", 00:22:01.962 "allow_unrecognized_csi": false, 00:22:01.962 "method": "bdev_nvme_attach_controller", 00:22:01.962 "req_id": 1 00:22:01.962 } 00:22:01.962 Got JSON-RPC error response 00:22:01.962 response: 00:22:01.962 { 00:22:01.962 "code": -5, 00:22:01.962 "message": "Input/output error" 00:22:01.962 } 00:22:01.962 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3775048 00:22:01.962 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3775048 ']' 00:22:01.962 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3775048 00:22:01.962 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:01.962 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.962 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3775048 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3775048' 00:22:02.223 killing process with pid 3775048 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3775048 00:22:02.223 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.223 00:22:02.223 Latency(us) 00:22:02.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.223 =================================================================================================================== 00:22:02.223 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3775048 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PaaVwz6B3z 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PaaVwz6B3z 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PaaVwz6B3z 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PaaVwz6B3z 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3775263 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3775263 /var/tmp/bdevperf.sock 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3775263 ']' 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.223 08:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.223 [2024-10-01 08:36:53.970259] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:22:02.223 [2024-10-01 08:36:53.970319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775263 ] 00:22:02.223 [2024-10-01 08:36:54.020561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.485 [2024-10-01 08:36:54.072301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.485 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.485 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.485 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PaaVwz6B3z 00:22:02.745 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:02.745 [2024-10-01 08:36:54.457141] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.745 [2024-10-01 08:36:54.462258] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:02.745 [2024-10-01 08:36:54.462278] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:02.745 [2024-10-01 08:36:54.462297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.745 [2024-10-01 08:36:54.462337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5ce10 (107): Transport endpoint is not connected 00:22:02.745 [2024-10-01 08:36:54.463324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5ce10 (9): Bad file descriptor 00:22:02.745 [2024-10-01 08:36:54.464326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.745 [2024-10-01 08:36:54.464333] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.745 [2024-10-01 08:36:54.464339] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:02.745 [2024-10-01 08:36:54.464346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
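The second case flips the mismatch: the key is the right one (key0, /tmp/tmp.PaaVwz6B3z) but the connection presents hostnqn host2, which was never added to the subsystem, so the target's lookup for the PSK identity logged above ('NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1') finds nothing and the initiator again sees the -5 dump below. The attach would only succeed if host2 had been registered with its own PSK, along the lines of (rpc.py standing in for the full scripts/rpc.py path used throughout this log):

rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0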
00:22:02.745 request: 00:22:02.745 { 00:22:02.745 "name": "TLSTEST", 00:22:02.745 "trtype": "tcp", 00:22:02.745 "traddr": "10.0.0.2", 00:22:02.745 "adrfam": "ipv4", 00:22:02.745 "trsvcid": "4420", 00:22:02.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.745 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.746 "prchk_reftag": false, 00:22:02.746 "prchk_guard": false, 00:22:02.746 "hdgst": false, 00:22:02.746 "ddgst": false, 00:22:02.746 "psk": "key0", 00:22:02.746 "allow_unrecognized_csi": false, 00:22:02.746 "method": "bdev_nvme_attach_controller", 00:22:02.746 "req_id": 1 00:22:02.746 } 00:22:02.746 Got JSON-RPC error response 00:22:02.746 response: 00:22:02.746 { 00:22:02.746 "code": -5, 00:22:02.746 "message": "Input/output error" 00:22:02.746 } 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3775263 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3775263 ']' 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3775263 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3775263 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3775263' 00:22:02.746 killing process with pid 3775263 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3775263 00:22:02.746 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.746 00:22:02.746 Latency(us) 00:22:02.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.746 =================================================================================================================== 00:22:02.746 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.746 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3775263 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PaaVwz6B3z 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PaaVwz6B3z 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PaaVwz6B3z 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PaaVwz6B3z 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3775577 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3775577 /var/tmp/bdevperf.sock 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3775577 ']' 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.006 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.006 [2024-10-01 08:36:54.710220] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:22:03.006 [2024-10-01 08:36:54.710278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775577 ] 00:22:03.006 [2024-10-01 08:36:54.759831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.006 [2024-10-01 08:36:54.811229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.266 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.266 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:03.266 08:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PaaVwz6B3z 00:22:03.266 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:03.525 [2024-10-01 08:36:55.228092] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.525 [2024-10-01 08:36:55.234138] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:03.525 [2024-10-01 08:36:55.234157] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:03.525 [2024-10-01 08:36:55.234176] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.525 [2024-10-01 08:36:55.234481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2419e10 (107): Transport endpoint is not connected 00:22:03.525 [2024-10-01 08:36:55.235478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2419e10 (9): Bad file descriptor 00:22:03.525 [2024-10-01 08:36:55.236479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:03.525 [2024-10-01 08:36:55.236491] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:03.525 [2024-10-01 08:36:55.236497] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:03.525 [2024-10-01 08:36:55.236505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
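
The failure above hinges on the TLS PSK identity the target searches for: both tcp.c and posix.c report "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2". That string appears to follow the NVMe/TCP PSK identity convention, "NVMe" plus a version digit, "R" for a retained PSK, a hash indicator, then the host and subsystem NQNs, so a key provisioned for cnode1 can never match an attach to cnode2 and the connection dies with the I/O error shown below:

# Composing the identity the target looked up (layout inferred from the
# error text above, not taken from SPDK source):
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$identity"
# -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
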
00:22:03.525 request: 00:22:03.525 { 00:22:03.525 "name": "TLSTEST", 00:22:03.525 "trtype": "tcp", 00:22:03.525 "traddr": "10.0.0.2", 00:22:03.525 "adrfam": "ipv4", 00:22:03.525 "trsvcid": "4420", 00:22:03.525 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.525 "prchk_reftag": false, 00:22:03.525 "prchk_guard": false, 00:22:03.525 "hdgst": false, 00:22:03.525 "ddgst": false, 00:22:03.525 "psk": "key0", 00:22:03.525 "allow_unrecognized_csi": false, 00:22:03.525 "method": "bdev_nvme_attach_controller", 00:22:03.525 "req_id": 1 00:22:03.525 } 00:22:03.525 Got JSON-RPC error response 00:22:03.525 response: 00:22:03.525 { 00:22:03.525 "code": -5, 00:22:03.525 "message": "Input/output error" 00:22:03.525 } 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3775577 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3775577 ']' 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3775577 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3775577 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3775577' 00:22:03.526 killing process with pid 3775577 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3775577 00:22:03.526 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.526 00:22:03.526 Latency(us) 00:22:03.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.526 =================================================================================================================== 00:22:03.526 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.526 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3775577 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3775617 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3775617 /var/tmp/bdevperf.sock 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3775617 ']' 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.786 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.786 [2024-10-01 08:36:55.499704] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
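
Each of these negative cases runs under the NOT wrapper from autotest_common.sh, whose xtrace brackets every failing run_bdevperf: it captures the exit status (es), treats anything above 128 as death by signal, and succeeds only when the wrapped command failed. A simplified model of that inversion, matching the (( es > 128 )) and (( !es == 0 )) checks visible in the trace (the real wrapper also validates the argument with type -t and honors an expected EXIT_STATUS):

# Simplified model of the NOT helper seen in the trace.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # our simplification of the signal case
    # Success for NOT means the wrapped command returned non-zero.
    (( !es == 0 ))
}

# Usage, as in target/tls.sh@156: an empty key path must be rejected.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
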
00:22:03.786 [2024-10-01 08:36:55.499764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775617 ] 00:22:03.786 [2024-10-01 08:36:55.549453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.786 [2024-10-01 08:36:55.601387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.047 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:04.047 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:04.047 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:04.047 [2024-10-01 08:36:55.833903] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:04.047 [2024-10-01 08:36:55.833925] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:04.047 request: 00:22:04.047 { 00:22:04.047 "name": "key0", 00:22:04.047 "path": "", 00:22:04.047 "method": "keyring_file_add_key", 00:22:04.047 "req_id": 1 00:22:04.047 } 00:22:04.047 Got JSON-RPC error response 00:22:04.047 response: 00:22:04.047 { 00:22:04.047 "code": -1, 00:22:04.047 "message": "Operation not permitted" 00:22:04.047 } 00:22:04.047 08:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:04.307 [2024-10-01 08:36:56.002402] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.307 [2024-10-01 08:36:56.002424] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:04.307 request: 00:22:04.307 { 00:22:04.307 "name": "TLSTEST", 00:22:04.307 "trtype": "tcp", 00:22:04.307 "traddr": "10.0.0.2", 00:22:04.307 "adrfam": "ipv4", 00:22:04.307 "trsvcid": "4420", 00:22:04.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.307 "prchk_reftag": false, 00:22:04.307 "prchk_guard": false, 00:22:04.307 "hdgst": false, 00:22:04.307 "ddgst": false, 00:22:04.307 "psk": "key0", 00:22:04.307 "allow_unrecognized_csi": false, 00:22:04.307 "method": "bdev_nvme_attach_controller", 00:22:04.307 "req_id": 1 00:22:04.307 } 00:22:04.307 Got JSON-RPC error response 00:22:04.307 response: 00:22:04.307 { 00:22:04.307 "code": -126, 00:22:04.307 "message": "Required key not available" 00:22:04.307 } 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3775617 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3775617 ']' 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3775617 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3775617 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:04.307 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3775617' 00:22:04.308 killing process with pid 3775617 00:22:04.308 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3775617 00:22:04.308 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.308 00:22:04.308 Latency(us) 00:22:04.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.308 =================================================================================================================== 00:22:04.308 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.308 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3775617 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3770144 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3770144 ']' 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3770144 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3770144 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3770144' 00:22:04.568 killing process with pid 3770144 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3770144 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3770144 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # 
key=00112233445566778899aabbccddeeff0011223344556677 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:22:04.568 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ZvaF733Ncj 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ZvaF733Ncj 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3775963 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3775963 00:22:04.827 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:04.828 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3775963 ']' 00:22:04.828 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.828 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.828 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.828 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.828 08:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.828 [2024-10-01 08:36:56.496723] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:04.828 [2024-10-01 08:36:56.496776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.828 [2024-10-01 08:36:56.578882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.828 [2024-10-01 08:36:56.630988] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.828 [2024-10-01 08:36:56.631040] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
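
The key_long value printed above is the NVMe TLS PSK in interchange form: an NVMeTLSkey-1 prefix, a two-digit hash indicator (02 appears to select the SHA-384 variant), and a base64 payload. The format_key helper builds the payload by appending a CRC32 to the configured key bytes and base64-encoding the result; a standalone re-derivation of what its inline python does (our reconstruction, assuming the CRC is appended little-endian):

# Re-derive the interchange key printed in the trace above.
key=00112233445566778899aabbccddeeff0011223344556677   # configured PSK
KEY="$key" python3 -c '
import base64, os, struct, zlib
k = os.environ["KEY"].encode()
crc = struct.pack("<I", zlib.crc32(k))   # little-endian CRC32 of the key bytes
print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")
'
# Expected, matching the trace:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
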
00:22:04.828 [2024-10-01 08:36:56.631046] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.828 [2024-10-01 08:36:56.631051] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.828 [2024-10-01 08:36:56.631056] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.828 [2024-10-01 08:36:56.631540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ZvaF733Ncj 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZvaF733Ncj 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.765 [2024-10-01 08:36:57.470591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.765 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:06.025 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:06.025 [2024-10-01 08:36:57.807426] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.025 [2024-10-01 08:36:57.807633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.025 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:06.284 malloc0 00:22:06.284 08:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:06.544 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:06.544 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZvaF733Ncj 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZvaF733Ncj 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3776327 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3776327 /var/tmp/bdevperf.sock 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3776327 ']' 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.804 08:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.804 [2024-10-01 08:36:58.562518] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
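
For the positive case at tls.sh@166-168 the target side is provisioned first; every command below is lifted from the setup_nvmf_tgt trace above (only the SPDK/RPC shorthands are ours). The -k flag on the listener is what turns on the experimental TLS path:

# Target-side TLS provisioning, condensed from setup_nvmf_tgt.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"   # default target socket, /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
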
00:22:06.804 [2024-10-01 08:36:58.562573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776327 ] 00:22:06.804 [2024-10-01 08:36:58.612743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.063 [2024-10-01 08:36:58.664583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.633 08:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.633 08:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:07.633 08:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:07.892 08:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:07.892 [2024-10-01 08:36:59.638876] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.892 TLSTESTn1 00:22:08.151 08:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:08.151 Running I/O for 10 seconds... 00:22:18.433 5489.00 IOPS, 21.44 MiB/s 5420.50 IOPS, 21.17 MiB/s 5793.33 IOPS, 22.63 MiB/s 5693.50 IOPS, 22.24 MiB/s 5612.40 IOPS, 21.92 MiB/s 5693.50 IOPS, 22.24 MiB/s 5686.29 IOPS, 22.21 MiB/s 5621.00 IOPS, 21.96 MiB/s 5590.78 IOPS, 21.84 MiB/s 5568.90 IOPS, 21.75 MiB/s 00:22:18.433 Latency(us) 00:22:18.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.433 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:18.433 Verification LBA range: start 0x0 length 0x2000 00:22:18.434 TLSTESTn1 : 10.02 5572.14 21.77 0.00 0.00 22939.70 4642.13 29709.65 00:22:18.434 =================================================================================================================== 00:22:18.434 Total : 5572.14 21.77 0.00 0.00 22939.70 4642.13 29709.65 00:22:18.434 { 00:22:18.434 "results": [ 00:22:18.434 { 00:22:18.434 "job": "TLSTESTn1", 00:22:18.434 "core_mask": "0x4", 00:22:18.434 "workload": "verify", 00:22:18.434 "status": "finished", 00:22:18.434 "verify_range": { 00:22:18.434 "start": 0, 00:22:18.434 "length": 8192 00:22:18.434 }, 00:22:18.434 "queue_depth": 128, 00:22:18.434 "io_size": 4096, 00:22:18.434 "runtime": 10.017152, 00:22:18.434 "iops": 5572.142660908011, 00:22:18.434 "mibps": 21.766182269171917, 00:22:18.434 "io_failed": 0, 00:22:18.434 "io_timeout": 0, 00:22:18.434 "avg_latency_us": 22939.704370114243, 00:22:18.434 "min_latency_us": 4642.133333333333, 00:22:18.434 "max_latency_us": 29709.653333333332 00:22:18.434 } 00:22:18.434 ], 00:22:18.434 "core_count": 1 00:22:18.434 } 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3776327 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 3776327 ']' 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3776327 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3776327 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3776327' 00:22:18.434 killing process with pid 3776327 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3776327 00:22:18.434 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.434 00:22:18.434 Latency(us) 00:22:18.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.434 =================================================================================================================== 00:22:18.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.434 08:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3776327 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ZvaF733Ncj 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZvaF733Ncj 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZvaF733Ncj 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZvaF733Ncj 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZvaF733Ncj 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3778595 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3778595 /var/tmp/bdevperf.sock 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3778595 ']' 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.434 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.434 [2024-10-01 08:37:10.125162] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:18.434 [2024-10-01 08:37:10.125218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778595 ] 00:22:18.434 [2024-10-01 08:37:10.176674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.434 [2024-10-01 08:37:10.228423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.377 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.377 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:19.377 08:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:19.377 [2024-10-01 08:37:11.070742] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZvaF733Ncj': 0100666 00:22:19.377 [2024-10-01 08:37:11.070771] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:19.377 request: 00:22:19.377 { 00:22:19.377 "name": "key0", 00:22:19.377 "path": "/tmp/tmp.ZvaF733Ncj", 00:22:19.377 "method": "keyring_file_add_key", 00:22:19.377 "req_id": 1 00:22:19.377 } 00:22:19.377 Got JSON-RPC error response 00:22:19.377 response: 00:22:19.377 { 00:22:19.377 "code": -1, 00:22:19.377 "message": "Operation not permitted" 00:22:19.377 } 00:22:19.377 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:19.638 [2024-10-01 08:37:11.247258] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.638 [2024-10-01 08:37:11.247276] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not 
load PSK: key0 00:22:19.638 request: 00:22:19.638 { 00:22:19.638 "name": "TLSTEST", 00:22:19.638 "trtype": "tcp", 00:22:19.638 "traddr": "10.0.0.2", 00:22:19.638 "adrfam": "ipv4", 00:22:19.638 "trsvcid": "4420", 00:22:19.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.638 "prchk_reftag": false, 00:22:19.638 "prchk_guard": false, 00:22:19.638 "hdgst": false, 00:22:19.638 "ddgst": false, 00:22:19.638 "psk": "key0", 00:22:19.638 "allow_unrecognized_csi": false, 00:22:19.638 "method": "bdev_nvme_attach_controller", 00:22:19.638 "req_id": 1 00:22:19.638 } 00:22:19.638 Got JSON-RPC error response 00:22:19.638 response: 00:22:19.638 { 00:22:19.638 "code": -126, 00:22:19.638 "message": "Required key not available" 00:22:19.638 } 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3778595 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3778595 ']' 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3778595 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3778595 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3778595' 00:22:19.638 killing process with pid 3778595 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3778595 00:22:19.638 Received shutdown signal, test time was about 10.000000 seconds 00:22:19.638 00:22:19.638 Latency(us) 00:22:19.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.638 =================================================================================================================== 00:22:19.638 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3778595 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3775963 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3775963 ']' 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3775963 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:19.638 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3775963 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3775963' 00:22:19.900 killing process with pid 3775963 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3775963 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3775963 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3778875 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3778875 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3778875 ']' 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.900 08:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.900 [2024-10-01 08:37:11.699942] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:19.900 [2024-10-01 08:37:11.700009] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.160 [2024-10-01 08:37:11.779905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.160 [2024-10-01 08:37:11.832797] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.160 [2024-10-01 08:37:11.832828] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
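
The chmod 0666 case above exercises the keyring's permission gate: keyring_file_add_key appears to refuse any key file whose mode grants group or world access (the error quotes the full mode, 0100666), and with no key in the keyring the subsequent attach fails with -126, "Required key not available". A quick way to reproduce the check outside the test (our illustration, not SPDK's implementation; GNU stat assumed, as on the Linux node here):

# Flag a PSK file the keyring would reject.
key=/tmp/tmp.ZvaF733Ncj
mode=$(stat -c '%a' "$key")
if (( 8#$mode & 8#077 )); then
    echo "$key is group/world accessible (0$mode); keyring_file_add_key will fail"
fi
chmod 0600 "$key"   # the permissions the happy-path tests use
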
00:22:20.160 [2024-10-01 08:37:11.832834] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.160 [2024-10-01 08:37:11.832838] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.160 [2024-10-01 08:37:11.832842] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.160 [2024-10-01 08:37:11.833289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ZvaF733Ncj 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZvaF733Ncj 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ZvaF733Ncj 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZvaF733Ncj 00:22:20.729 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:20.989 [2024-10-01 08:37:12.684221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.989 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:21.250 08:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:21.250 [2024-10-01 08:37:13.005014] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.250 [2024-10-01 08:37:13.005221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.250 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:21.511 malloc0 00:22:21.511 08:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:21.775 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:21.775 [2024-10-01 08:37:13.524873] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZvaF733Ncj': 0100666 00:22:21.775 [2024-10-01 08:37:13.524898] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:21.775 request: 00:22:21.775 { 00:22:21.775 "name": "key0", 00:22:21.775 "path": "/tmp/tmp.ZvaF733Ncj", 00:22:21.775 "method": "keyring_file_add_key", 00:22:21.775 "req_id": 1 00:22:21.775 } 00:22:21.775 Got JSON-RPC error response 00:22:21.775 response: 00:22:21.775 { 00:22:21.775 "code": -1, 00:22:21.775 "message": "Operation not permitted" 00:22:21.775 } 00:22:21.775 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:22.035 [2024-10-01 08:37:13.689300] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:22.035 [2024-10-01 08:37:13.689327] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:22.035 request: 00:22:22.035 { 00:22:22.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.035 "host": "nqn.2016-06.io.spdk:host1", 00:22:22.035 "psk": "key0", 00:22:22.035 "method": "nvmf_subsystem_add_host", 00:22:22.035 "req_id": 1 00:22:22.035 } 00:22:22.035 Got JSON-RPC error response 00:22:22.035 response: 00:22:22.035 { 00:22:22.035 "code": -32603, 00:22:22.035 "message": "Internal error" 00:22:22.035 } 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3778875 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3778875 ']' 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3778875 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3778875 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3778875' 00:22:22.035 killing process with pid 3778875 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 3778875 00:22:22.035 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3778875 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ZvaF733Ncj 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3779394 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3779394 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3779394 ']' 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.295 08:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.295 [2024-10-01 08:37:13.958734] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:22.295 [2024-10-01 08:37:13.958792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.295 [2024-10-01 08:37:14.040277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.295 [2024-10-01 08:37:14.092887] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.295 [2024-10-01 08:37:14.092917] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.295 [2024-10-01 08:37:14.092923] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.295 [2024-10-01 08:37:14.092927] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.295 [2024-10-01 08:37:14.092931] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
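
With the key back at 0600 (tls.sh@182), the remainder of the section starts a fresh target, re-runs the same provisioning, attaches TLSTESTn1 successfully, and at tls.sh@198 snapshots the live target state with save_config, storing the JSON dump that follows in a shell variable (tgtconf). Since a saved config is self-contained, it can also seed a later target without replaying RPCs; a sketch (the file name is ours for illustration):

# Capture the running target's configuration over its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py save_config > /tmp/tgtconf.json
# A new target instance could then start preconfigured:
# $SPDK/build/bin/nvmf_tgt -c /tmp/tgtconf.json
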
00:22:22.295 [2024-10-01 08:37:14.093356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ZvaF733Ncj 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZvaF733Ncj 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:23.235 [2024-10-01 08:37:14.940146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.235 08:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:23.496 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:23.496 [2024-10-01 08:37:15.260934] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.496 [2024-10-01 08:37:15.261141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.496 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.755 malloc0 00:22:23.755 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3779760 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3779760 /var/tmp/bdevperf.sock 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3779760 ']' 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.128 08:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.415 [2024-10-01 08:37:15.982616] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:24.415 [2024-10-01 08:37:15.982670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779760 ] 00:22:24.415 [2024-10-01 08:37:16.032637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.415 [2024-10-01 08:37:16.085284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.985 08:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.985 08:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.985 08:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:25.246 08:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:25.506 [2024-10-01 08:37:17.079694] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.506 TLSTESTn1 00:22:25.506 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:25.766 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:25.766 "subsystems": [ 00:22:25.766 { 00:22:25.766 "subsystem": "keyring", 00:22:25.766 "config": [ 00:22:25.766 { 00:22:25.766 "method": "keyring_file_add_key", 00:22:25.766 "params": { 00:22:25.766 "name": "key0", 00:22:25.766 "path": "/tmp/tmp.ZvaF733Ncj" 00:22:25.766 } 00:22:25.766 } 00:22:25.766 ] 00:22:25.766 }, 00:22:25.766 { 00:22:25.766 "subsystem": "iobuf", 00:22:25.766 "config": [ 00:22:25.766 { 00:22:25.766 "method": "iobuf_set_options", 00:22:25.766 "params": { 00:22:25.766 "small_pool_count": 8192, 00:22:25.766 "large_pool_count": 1024, 00:22:25.766 "small_bufsize": 8192, 00:22:25.766 "large_bufsize": 135168 00:22:25.766 } 00:22:25.766 } 00:22:25.766 ] 00:22:25.766 }, 00:22:25.766 { 00:22:25.766 "subsystem": "sock", 00:22:25.766 "config": [ 00:22:25.766 { 00:22:25.766 "method": "sock_set_default_impl", 00:22:25.766 "params": { 00:22:25.766 "impl_name": "posix" 00:22:25.766 } 00:22:25.766 }, 
00:22:25.766 { 00:22:25.766 "method": "sock_impl_set_options", 00:22:25.766 "params": { 00:22:25.766 "impl_name": "ssl", 00:22:25.766 "recv_buf_size": 4096, 00:22:25.766 "send_buf_size": 4096, 00:22:25.766 "enable_recv_pipe": true, 00:22:25.766 "enable_quickack": false, 00:22:25.766 "enable_placement_id": 0, 00:22:25.766 "enable_zerocopy_send_server": true, 00:22:25.766 "enable_zerocopy_send_client": false, 00:22:25.766 "zerocopy_threshold": 0, 00:22:25.766 "tls_version": 0, 00:22:25.766 "enable_ktls": false 00:22:25.766 } 00:22:25.766 }, 00:22:25.766 { 00:22:25.766 "method": "sock_impl_set_options", 00:22:25.766 "params": { 00:22:25.766 "impl_name": "posix", 00:22:25.766 "recv_buf_size": 2097152, 00:22:25.766 "send_buf_size": 2097152, 00:22:25.766 "enable_recv_pipe": true, 00:22:25.766 "enable_quickack": false, 00:22:25.766 "enable_placement_id": 0, 00:22:25.766 "enable_zerocopy_send_server": true, 00:22:25.766 "enable_zerocopy_send_client": false, 00:22:25.766 "zerocopy_threshold": 0, 00:22:25.766 "tls_version": 0, 00:22:25.766 "enable_ktls": false 00:22:25.766 } 00:22:25.766 } 00:22:25.766 ] 00:22:25.766 }, 00:22:25.766 { 00:22:25.766 "subsystem": "vmd", 00:22:25.766 "config": [] 00:22:25.766 }, 00:22:25.766 { 00:22:25.766 "subsystem": "accel", 00:22:25.766 "config": [ 00:22:25.766 { 00:22:25.766 "method": "accel_set_options", 00:22:25.766 "params": { 00:22:25.766 "small_cache_size": 128, 00:22:25.767 "large_cache_size": 16, 00:22:25.767 "task_count": 2048, 00:22:25.767 "sequence_count": 2048, 00:22:25.767 "buf_count": 2048 00:22:25.767 } 00:22:25.767 } 00:22:25.767 ] 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "subsystem": "bdev", 00:22:25.767 "config": [ 00:22:25.767 { 00:22:25.767 "method": "bdev_set_options", 00:22:25.767 "params": { 00:22:25.767 "bdev_io_pool_size": 65535, 00:22:25.767 "bdev_io_cache_size": 256, 00:22:25.767 "bdev_auto_examine": true, 00:22:25.767 "iobuf_small_cache_size": 128, 00:22:25.767 "iobuf_large_cache_size": 16 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "bdev_raid_set_options", 00:22:25.767 "params": { 00:22:25.767 "process_window_size_kb": 1024, 00:22:25.767 "process_max_bandwidth_mb_sec": 0 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "bdev_iscsi_set_options", 00:22:25.767 "params": { 00:22:25.767 "timeout_sec": 30 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "bdev_nvme_set_options", 00:22:25.767 "params": { 00:22:25.767 "action_on_timeout": "none", 00:22:25.767 "timeout_us": 0, 00:22:25.767 "timeout_admin_us": 0, 00:22:25.767 "keep_alive_timeout_ms": 10000, 00:22:25.767 "arbitration_burst": 0, 00:22:25.767 "low_priority_weight": 0, 00:22:25.767 "medium_priority_weight": 0, 00:22:25.767 "high_priority_weight": 0, 00:22:25.767 "nvme_adminq_poll_period_us": 10000, 00:22:25.767 "nvme_ioq_poll_period_us": 0, 00:22:25.767 "io_queue_requests": 0, 00:22:25.767 "delay_cmd_submit": true, 00:22:25.767 "transport_retry_count": 4, 00:22:25.767 "bdev_retry_count": 3, 00:22:25.767 "transport_ack_timeout": 0, 00:22:25.767 "ctrlr_loss_timeout_sec": 0, 00:22:25.767 "reconnect_delay_sec": 0, 00:22:25.767 "fast_io_fail_timeout_sec": 0, 00:22:25.767 "disable_auto_failback": false, 00:22:25.767 "generate_uuids": false, 00:22:25.767 "transport_tos": 0, 00:22:25.767 "nvme_error_stat": false, 00:22:25.767 "rdma_srq_size": 0, 00:22:25.767 "io_path_stat": false, 00:22:25.767 "allow_accel_sequence": false, 00:22:25.767 "rdma_max_cq_size": 0, 00:22:25.767 "rdma_cm_event_timeout_ms": 0, 00:22:25.767 
"dhchap_digests": [ 00:22:25.767 "sha256", 00:22:25.767 "sha384", 00:22:25.767 "sha512" 00:22:25.767 ], 00:22:25.767 "dhchap_dhgroups": [ 00:22:25.767 "null", 00:22:25.767 "ffdhe2048", 00:22:25.767 "ffdhe3072", 00:22:25.767 "ffdhe4096", 00:22:25.767 "ffdhe6144", 00:22:25.767 "ffdhe8192" 00:22:25.767 ] 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "bdev_nvme_set_hotplug", 00:22:25.767 "params": { 00:22:25.767 "period_us": 100000, 00:22:25.767 "enable": false 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "bdev_malloc_create", 00:22:25.767 "params": { 00:22:25.767 "name": "malloc0", 00:22:25.767 "num_blocks": 8192, 00:22:25.767 "block_size": 4096, 00:22:25.767 "physical_block_size": 4096, 00:22:25.767 "uuid": "2013ee2a-b896-427a-8cd3-97857cce96dc", 00:22:25.767 "optimal_io_boundary": 0, 00:22:25.767 "md_size": 0, 00:22:25.767 "dif_type": 0, 00:22:25.767 "dif_is_head_of_md": false, 00:22:25.767 "dif_pi_format": 0 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "bdev_wait_for_examine" 00:22:25.767 } 00:22:25.767 ] 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "subsystem": "nbd", 00:22:25.767 "config": [] 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "subsystem": "scheduler", 00:22:25.767 "config": [ 00:22:25.767 { 00:22:25.767 "method": "framework_set_scheduler", 00:22:25.767 "params": { 00:22:25.767 "name": "static" 00:22:25.767 } 00:22:25.767 } 00:22:25.767 ] 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "subsystem": "nvmf", 00:22:25.767 "config": [ 00:22:25.767 { 00:22:25.767 "method": "nvmf_set_config", 00:22:25.767 "params": { 00:22:25.767 "discovery_filter": "match_any", 00:22:25.767 "admin_cmd_passthru": { 00:22:25.767 "identify_ctrlr": false 00:22:25.767 }, 00:22:25.767 "dhchap_digests": [ 00:22:25.767 "sha256", 00:22:25.767 "sha384", 00:22:25.767 "sha512" 00:22:25.767 ], 00:22:25.767 "dhchap_dhgroups": [ 00:22:25.767 "null", 00:22:25.767 "ffdhe2048", 00:22:25.767 "ffdhe3072", 00:22:25.767 "ffdhe4096", 00:22:25.767 "ffdhe6144", 00:22:25.767 "ffdhe8192" 00:22:25.767 ] 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "nvmf_set_max_subsystems", 00:22:25.767 "params": { 00:22:25.767 "max_subsystems": 1024 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "nvmf_set_crdt", 00:22:25.767 "params": { 00:22:25.767 "crdt1": 0, 00:22:25.767 "crdt2": 0, 00:22:25.767 "crdt3": 0 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "nvmf_create_transport", 00:22:25.767 "params": { 00:22:25.767 "trtype": "TCP", 00:22:25.767 "max_queue_depth": 128, 00:22:25.767 "max_io_qpairs_per_ctrlr": 127, 00:22:25.767 "in_capsule_data_size": 4096, 00:22:25.767 "max_io_size": 131072, 00:22:25.767 "io_unit_size": 131072, 00:22:25.767 "max_aq_depth": 128, 00:22:25.767 "num_shared_buffers": 511, 00:22:25.767 "buf_cache_size": 4294967295, 00:22:25.767 "dif_insert_or_strip": false, 00:22:25.767 "zcopy": false, 00:22:25.767 "c2h_success": false, 00:22:25.767 "sock_priority": 0, 00:22:25.767 "abort_timeout_sec": 1, 00:22:25.767 "ack_timeout": 0, 00:22:25.767 "data_wr_pool_size": 0 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "nvmf_create_subsystem", 00:22:25.767 "params": { 00:22:25.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.767 "allow_any_host": false, 00:22:25.767 "serial_number": "SPDK00000000000001", 00:22:25.767 "model_number": "SPDK bdev Controller", 00:22:25.767 "max_namespaces": 10, 00:22:25.767 "min_cntlid": 1, 00:22:25.767 "max_cntlid": 65519, 00:22:25.767 
"ana_reporting": false 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "nvmf_subsystem_add_host", 00:22:25.767 "params": { 00:22:25.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.767 "host": "nqn.2016-06.io.spdk:host1", 00:22:25.767 "psk": "key0" 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "nvmf_subsystem_add_ns", 00:22:25.767 "params": { 00:22:25.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.767 "namespace": { 00:22:25.767 "nsid": 1, 00:22:25.767 "bdev_name": "malloc0", 00:22:25.767 "nguid": "2013EE2AB896427A8CD397857CCE96DC", 00:22:25.767 "uuid": "2013ee2a-b896-427a-8cd3-97857cce96dc", 00:22:25.767 "no_auto_visible": false 00:22:25.767 } 00:22:25.767 } 00:22:25.767 }, 00:22:25.767 { 00:22:25.767 "method": "nvmf_subsystem_add_listener", 00:22:25.767 "params": { 00:22:25.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.767 "listen_address": { 00:22:25.767 "trtype": "TCP", 00:22:25.767 "adrfam": "IPv4", 00:22:25.767 "traddr": "10.0.0.2", 00:22:25.767 "trsvcid": "4420" 00:22:25.767 }, 00:22:25.767 "secure_channel": true 00:22:25.767 } 00:22:25.767 } 00:22:25.767 ] 00:22:25.767 } 00:22:25.767 ] 00:22:25.767 }' 00:22:25.767 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:26.028 "subsystems": [ 00:22:26.028 { 00:22:26.028 "subsystem": "keyring", 00:22:26.028 "config": [ 00:22:26.028 { 00:22:26.028 "method": "keyring_file_add_key", 00:22:26.028 "params": { 00:22:26.028 "name": "key0", 00:22:26.028 "path": "/tmp/tmp.ZvaF733Ncj" 00:22:26.028 } 00:22:26.028 } 00:22:26.028 ] 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "subsystem": "iobuf", 00:22:26.028 "config": [ 00:22:26.028 { 00:22:26.028 "method": "iobuf_set_options", 00:22:26.028 "params": { 00:22:26.028 "small_pool_count": 8192, 00:22:26.028 "large_pool_count": 1024, 00:22:26.028 "small_bufsize": 8192, 00:22:26.028 "large_bufsize": 135168 00:22:26.028 } 00:22:26.028 } 00:22:26.028 ] 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "subsystem": "sock", 00:22:26.028 "config": [ 00:22:26.028 { 00:22:26.028 "method": "sock_set_default_impl", 00:22:26.028 "params": { 00:22:26.028 "impl_name": "posix" 00:22:26.028 } 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "method": "sock_impl_set_options", 00:22:26.028 "params": { 00:22:26.028 "impl_name": "ssl", 00:22:26.028 "recv_buf_size": 4096, 00:22:26.028 "send_buf_size": 4096, 00:22:26.028 "enable_recv_pipe": true, 00:22:26.028 "enable_quickack": false, 00:22:26.028 "enable_placement_id": 0, 00:22:26.028 "enable_zerocopy_send_server": true, 00:22:26.028 "enable_zerocopy_send_client": false, 00:22:26.028 "zerocopy_threshold": 0, 00:22:26.028 "tls_version": 0, 00:22:26.028 "enable_ktls": false 00:22:26.028 } 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "method": "sock_impl_set_options", 00:22:26.028 "params": { 00:22:26.028 "impl_name": "posix", 00:22:26.028 "recv_buf_size": 2097152, 00:22:26.028 "send_buf_size": 2097152, 00:22:26.028 "enable_recv_pipe": true, 00:22:26.028 "enable_quickack": false, 00:22:26.028 "enable_placement_id": 0, 00:22:26.028 "enable_zerocopy_send_server": true, 00:22:26.028 "enable_zerocopy_send_client": false, 00:22:26.028 "zerocopy_threshold": 0, 00:22:26.028 "tls_version": 0, 00:22:26.028 "enable_ktls": false 00:22:26.028 } 00:22:26.028 } 00:22:26.028 ] 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 
"subsystem": "vmd", 00:22:26.028 "config": [] 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "subsystem": "accel", 00:22:26.028 "config": [ 00:22:26.028 { 00:22:26.028 "method": "accel_set_options", 00:22:26.028 "params": { 00:22:26.028 "small_cache_size": 128, 00:22:26.028 "large_cache_size": 16, 00:22:26.028 "task_count": 2048, 00:22:26.028 "sequence_count": 2048, 00:22:26.028 "buf_count": 2048 00:22:26.028 } 00:22:26.028 } 00:22:26.028 ] 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "subsystem": "bdev", 00:22:26.028 "config": [ 00:22:26.028 { 00:22:26.028 "method": "bdev_set_options", 00:22:26.028 "params": { 00:22:26.028 "bdev_io_pool_size": 65535, 00:22:26.028 "bdev_io_cache_size": 256, 00:22:26.028 "bdev_auto_examine": true, 00:22:26.028 "iobuf_small_cache_size": 128, 00:22:26.028 "iobuf_large_cache_size": 16 00:22:26.028 } 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "method": "bdev_raid_set_options", 00:22:26.028 "params": { 00:22:26.028 "process_window_size_kb": 1024, 00:22:26.028 "process_max_bandwidth_mb_sec": 0 00:22:26.028 } 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "method": "bdev_iscsi_set_options", 00:22:26.028 "params": { 00:22:26.028 "timeout_sec": 30 00:22:26.028 } 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "method": "bdev_nvme_set_options", 00:22:26.028 "params": { 00:22:26.028 "action_on_timeout": "none", 00:22:26.028 "timeout_us": 0, 00:22:26.028 "timeout_admin_us": 0, 00:22:26.028 "keep_alive_timeout_ms": 10000, 00:22:26.028 "arbitration_burst": 0, 00:22:26.028 "low_priority_weight": 0, 00:22:26.028 "medium_priority_weight": 0, 00:22:26.028 "high_priority_weight": 0, 00:22:26.028 "nvme_adminq_poll_period_us": 10000, 00:22:26.028 "nvme_ioq_poll_period_us": 0, 00:22:26.028 "io_queue_requests": 512, 00:22:26.028 "delay_cmd_submit": true, 00:22:26.028 "transport_retry_count": 4, 00:22:26.028 "bdev_retry_count": 3, 00:22:26.028 "transport_ack_timeout": 0, 00:22:26.028 "ctrlr_loss_timeout_sec": 0, 00:22:26.028 "reconnect_delay_sec": 0, 00:22:26.028 "fast_io_fail_timeout_sec": 0, 00:22:26.028 "disable_auto_failback": false, 00:22:26.028 "generate_uuids": false, 00:22:26.028 "transport_tos": 0, 00:22:26.028 "nvme_error_stat": false, 00:22:26.028 "rdma_srq_size": 0, 00:22:26.028 "io_path_stat": false, 00:22:26.028 "allow_accel_sequence": false, 00:22:26.028 "rdma_max_cq_size": 0, 00:22:26.028 "rdma_cm_event_timeout_ms": 0, 00:22:26.028 "dhchap_digests": [ 00:22:26.028 "sha256", 00:22:26.028 "sha384", 00:22:26.028 "sha512" 00:22:26.028 ], 00:22:26.028 "dhchap_dhgroups": [ 00:22:26.028 "null", 00:22:26.028 "ffdhe2048", 00:22:26.028 "ffdhe3072", 00:22:26.028 "ffdhe4096", 00:22:26.028 "ffdhe6144", 00:22:26.028 "ffdhe8192" 00:22:26.028 ] 00:22:26.028 } 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "method": "bdev_nvme_attach_controller", 00:22:26.028 "params": { 00:22:26.028 "name": "TLSTEST", 00:22:26.028 "trtype": "TCP", 00:22:26.028 "adrfam": "IPv4", 00:22:26.028 "traddr": "10.0.0.2", 00:22:26.028 "trsvcid": "4420", 00:22:26.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.028 "prchk_reftag": false, 00:22:26.028 "prchk_guard": false, 00:22:26.028 "ctrlr_loss_timeout_sec": 0, 00:22:26.028 "reconnect_delay_sec": 0, 00:22:26.028 "fast_io_fail_timeout_sec": 0, 00:22:26.028 "psk": "key0", 00:22:26.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.028 "hdgst": false, 00:22:26.028 "ddgst": false 00:22:26.028 } 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "method": "bdev_nvme_set_hotplug", 00:22:26.028 "params": { 00:22:26.028 "period_us": 100000, 00:22:26.028 "enable": false 
00:22:26.028 } 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "method": "bdev_wait_for_examine" 00:22:26.028 } 00:22:26.028 ] 00:22:26.028 }, 00:22:26.028 { 00:22:26.028 "subsystem": "nbd", 00:22:26.028 "config": [] 00:22:26.028 } 00:22:26.028 ] 00:22:26.028 }' 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3779760 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3779760 ']' 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3779760 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3779760 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3779760' 00:22:26.028 killing process with pid 3779760 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3779760 00:22:26.028 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.028 00:22:26.028 Latency(us) 00:22:26.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.028 =================================================================================================================== 00:22:26.028 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.028 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3779760 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3779394 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3779394 ']' 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3779394 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3779394 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3779394' 00:22:26.289 killing process with pid 3779394 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3779394 00:22:26.289 08:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3779394 00:22:26.289 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:26.289 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:26.289 08:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.289 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.289 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:26.289 "subsystems": [ 00:22:26.289 { 00:22:26.289 "subsystem": "keyring", 00:22:26.289 "config": [ 00:22:26.289 { 00:22:26.289 "method": "keyring_file_add_key", 00:22:26.289 "params": { 00:22:26.289 "name": "key0", 00:22:26.289 "path": "/tmp/tmp.ZvaF733Ncj" 00:22:26.289 } 00:22:26.289 } 00:22:26.289 ] 00:22:26.289 }, 00:22:26.289 { 00:22:26.289 "subsystem": "iobuf", 00:22:26.289 "config": [ 00:22:26.289 { 00:22:26.289 "method": "iobuf_set_options", 00:22:26.289 "params": { 00:22:26.289 "small_pool_count": 8192, 00:22:26.289 "large_pool_count": 1024, 00:22:26.289 "small_bufsize": 8192, 00:22:26.289 "large_bufsize": 135168 00:22:26.289 } 00:22:26.289 } 00:22:26.289 ] 00:22:26.289 }, 00:22:26.289 { 00:22:26.289 "subsystem": "sock", 00:22:26.289 "config": [ 00:22:26.289 { 00:22:26.289 "method": "sock_set_default_impl", 00:22:26.289 "params": { 00:22:26.289 "impl_name": "posix" 00:22:26.289 } 00:22:26.289 }, 00:22:26.289 { 00:22:26.290 "method": "sock_impl_set_options", 00:22:26.290 "params": { 00:22:26.290 "impl_name": "ssl", 00:22:26.290 "recv_buf_size": 4096, 00:22:26.290 "send_buf_size": 4096, 00:22:26.290 "enable_recv_pipe": true, 00:22:26.290 "enable_quickack": false, 00:22:26.290 "enable_placement_id": 0, 00:22:26.290 "enable_zerocopy_send_server": true, 00:22:26.290 "enable_zerocopy_send_client": false, 00:22:26.290 "zerocopy_threshold": 0, 00:22:26.290 "tls_version": 0, 00:22:26.290 "enable_ktls": false 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "sock_impl_set_options", 00:22:26.290 "params": { 00:22:26.290 "impl_name": "posix", 00:22:26.290 "recv_buf_size": 2097152, 00:22:26.290 "send_buf_size": 2097152, 00:22:26.290 "enable_recv_pipe": true, 00:22:26.290 "enable_quickack": false, 00:22:26.290 "enable_placement_id": 0, 00:22:26.290 "enable_zerocopy_send_server": true, 00:22:26.290 "enable_zerocopy_send_client": false, 00:22:26.290 "zerocopy_threshold": 0, 00:22:26.290 "tls_version": 0, 00:22:26.290 "enable_ktls": false 00:22:26.290 } 00:22:26.290 } 00:22:26.290 ] 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "subsystem": "vmd", 00:22:26.290 "config": [] 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "subsystem": "accel", 00:22:26.290 "config": [ 00:22:26.290 { 00:22:26.290 "method": "accel_set_options", 00:22:26.290 "params": { 00:22:26.290 "small_cache_size": 128, 00:22:26.290 "large_cache_size": 16, 00:22:26.290 "task_count": 2048, 00:22:26.290 "sequence_count": 2048, 00:22:26.290 "buf_count": 2048 00:22:26.290 } 00:22:26.290 } 00:22:26.290 ] 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "subsystem": "bdev", 00:22:26.290 "config": [ 00:22:26.290 { 00:22:26.290 "method": "bdev_set_options", 00:22:26.290 "params": { 00:22:26.290 "bdev_io_pool_size": 65535, 00:22:26.290 "bdev_io_cache_size": 256, 00:22:26.290 "bdev_auto_examine": true, 00:22:26.290 "iobuf_small_cache_size": 128, 00:22:26.290 "iobuf_large_cache_size": 16 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "bdev_raid_set_options", 00:22:26.290 "params": { 00:22:26.290 "process_window_size_kb": 1024, 00:22:26.290 "process_max_bandwidth_mb_sec": 0 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "bdev_iscsi_set_options", 00:22:26.290 "params": { 00:22:26.290 "timeout_sec": 30 
00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "bdev_nvme_set_options", 00:22:26.290 "params": { 00:22:26.290 "action_on_timeout": "none", 00:22:26.290 "timeout_us": 0, 00:22:26.290 "timeout_admin_us": 0, 00:22:26.290 "keep_alive_timeout_ms": 10000, 00:22:26.290 "arbitration_burst": 0, 00:22:26.290 "low_priority_weight": 0, 00:22:26.290 "medium_priority_weight": 0, 00:22:26.290 "high_priority_weight": 0, 00:22:26.290 "nvme_adminq_poll_period_us": 10000, 00:22:26.290 "nvme_ioq_poll_period_us": 0, 00:22:26.290 "io_queue_requests": 0, 00:22:26.290 "delay_cmd_submit": true, 00:22:26.290 "transport_retry_count": 4, 00:22:26.290 "bdev_retry_count": 3, 00:22:26.290 "transport_ack_timeout": 0, 00:22:26.290 "ctrlr_loss_timeout_sec": 0, 00:22:26.290 "reconnect_delay_sec": 0, 00:22:26.290 "fast_io_fail_timeout_sec": 0, 00:22:26.290 "disable_auto_failback": false, 00:22:26.290 "generate_uuids": false, 00:22:26.290 "transport_tos": 0, 00:22:26.290 "nvme_error_stat": false, 00:22:26.290 "rdma_srq_size": 0, 00:22:26.290 "io_path_stat": false, 00:22:26.290 "allow_accel_sequence": false, 00:22:26.290 "rdma_max_cq_size": 0, 00:22:26.290 "rdma_cm_event_timeout_ms": 0, 00:22:26.290 "dhchap_digests": [ 00:22:26.290 "sha256", 00:22:26.290 "sha384", 00:22:26.290 "sha512" 00:22:26.290 ], 00:22:26.290 "dhchap_dhgroups": [ 00:22:26.290 "null", 00:22:26.290 "ffdhe2048", 00:22:26.290 "ffdhe3072", 00:22:26.290 "ffdhe4096", 00:22:26.290 "ffdhe6144", 00:22:26.290 "ffdhe8192" 00:22:26.290 ] 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "bdev_nvme_set_hotplug", 00:22:26.290 "params": { 00:22:26.290 "period_us": 100000, 00:22:26.290 "enable": false 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "bdev_malloc_create", 00:22:26.290 "params": { 00:22:26.290 "name": "malloc0", 00:22:26.290 "num_blocks": 8192, 00:22:26.290 "block_size": 4096, 00:22:26.290 "physical_block_size": 4096, 00:22:26.290 "uuid": "2013ee2a-b896-427a-8cd3-97857cce96dc", 00:22:26.290 "optimal_io_boundary": 0, 00:22:26.290 "md_size": 0, 00:22:26.290 "dif_type": 0, 00:22:26.290 "dif_is_head_of_md": false, 00:22:26.290 "dif_pi_format": 0 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "bdev_wait_for_examine" 00:22:26.290 } 00:22:26.290 ] 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "subsystem": "nbd", 00:22:26.290 "config": [] 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "subsystem": "scheduler", 00:22:26.290 "config": [ 00:22:26.290 { 00:22:26.290 "method": "framework_set_scheduler", 00:22:26.290 "params": { 00:22:26.290 "name": "static" 00:22:26.290 } 00:22:26.290 } 00:22:26.290 ] 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "subsystem": "nvmf", 00:22:26.290 "config": [ 00:22:26.290 { 00:22:26.290 "method": "nvmf_set_config", 00:22:26.290 "params": { 00:22:26.290 "discovery_filter": "match_any", 00:22:26.290 "admin_cmd_passthru": { 00:22:26.290 "identify_ctrlr": false 00:22:26.290 }, 00:22:26.290 "dhchap_digests": [ 00:22:26.290 "sha256", 00:22:26.290 "sha384", 00:22:26.290 "sha512" 00:22:26.290 ], 00:22:26.290 "dhchap_dhgroups": [ 00:22:26.290 "null", 00:22:26.290 "ffdhe2048", 00:22:26.290 "ffdhe3072", 00:22:26.290 "ffdhe4096", 00:22:26.290 "ffdhe6144", 00:22:26.290 "ffdhe8192" 00:22:26.290 ] 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "nvmf_set_max_subsystems", 00:22:26.290 "params": { 00:22:26.290 "max_subsystems": 1024 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "nvmf_set_crdt", 00:22:26.290 "params": { 00:22:26.290 
"crdt1": 0, 00:22:26.290 "crdt2": 0, 00:22:26.290 "crdt3": 0 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "nvmf_create_transport", 00:22:26.290 "params": { 00:22:26.290 "trtype": "TCP", 00:22:26.290 "max_queue_depth": 128, 00:22:26.290 "max_io_qpairs_per_ctrlr": 127, 00:22:26.290 "in_capsule_data_size": 4096, 00:22:26.290 "max_io_size": 131072, 00:22:26.290 "io_unit_size": 131072, 00:22:26.290 "max_aq_depth": 128, 00:22:26.290 "num_shared_buffers": 511, 00:22:26.290 "buf_cache_size": 4294967295, 00:22:26.290 "dif_insert_or_strip": false, 00:22:26.290 "zcopy": false, 00:22:26.290 "c2h_success": false, 00:22:26.290 "sock_priority": 0, 00:22:26.290 "abort_timeout_sec": 1, 00:22:26.290 "ack_timeout": 0, 00:22:26.290 "data_wr_pool_size": 0 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "nvmf_create_subsystem", 00:22:26.290 "params": { 00:22:26.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.290 "allow_any_host": false, 00:22:26.290 "serial_number": "SPDK00000000000001", 00:22:26.290 "model_number": "SPDK bdev Controller", 00:22:26.290 "max_namespaces": 10, 00:22:26.290 "min_cntlid": 1, 00:22:26.290 "max_cntlid": 65519, 00:22:26.290 "ana_reporting": false 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "nvmf_subsystem_add_host", 00:22:26.290 "params": { 00:22:26.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.290 "host": "nqn.2016-06.io.spdk:host1", 00:22:26.290 "psk": "key0" 00:22:26.290 } 00:22:26.290 }, 00:22:26.290 { 00:22:26.290 "method": "nvmf_subsystem_add_ns", 00:22:26.290 "params": { 00:22:26.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.291 "namespace": { 00:22:26.291 "nsid": 1, 00:22:26.291 "bdev_name": "malloc0", 00:22:26.291 "nguid": "2013EE2AB896427A8CD397857CCE96DC", 00:22:26.291 "uuid": "2013ee2a-b896-427a-8cd3-97857cce96dc", 00:22:26.291 "no_auto_visible": false 00:22:26.291 } 00:22:26.291 } 00:22:26.291 }, 00:22:26.291 { 00:22:26.291 "method": "nvmf_subsystem_add_listener", 00:22:26.291 "params": { 00:22:26.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.291 "listen_address": { 00:22:26.291 "trtype": "TCP", 00:22:26.291 "adrfam": "IPv4", 00:22:26.291 "traddr": "10.0.0.2", 00:22:26.291 "trsvcid": "4420" 00:22:26.291 }, 00:22:26.291 "secure_channel": true 00:22:26.291 } 00:22:26.291 } 00:22:26.291 ] 00:22:26.291 } 00:22:26.291 ] 00:22:26.291 }' 00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3780123 00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3780123 00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3780123 ']' 00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.291 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.291 [2024-10-01 08:37:18.102601] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:26.291 [2024-10-01 08:37:18.102658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.551 [2024-10-01 08:37:18.186022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.551 [2024-10-01 08:37:18.238958] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.551 [2024-10-01 08:37:18.238989] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.551 [2024-10-01 08:37:18.238999] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.551 [2024-10-01 08:37:18.239004] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.551 [2024-10-01 08:37:18.239008] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.551 [2024-10-01 08:37:18.239466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.811 [2024-10-01 08:37:18.452329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.811 [2024-10-01 08:37:18.484315] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.811 [2024-10-01 08:37:18.484526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.071 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.071 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:27.071 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:27.071 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.071 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3780464 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3780464 /var/tmp/bdevperf.sock 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3780464 ']' 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
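The target that just came up was provisioned entirely from that JSON, but it is the declarative form of the imperative sequence first issued at target/tls.sh@186: create the TCP transport, the subsystem, the TLS-enabled listener, back it with a malloc bdev, and bind the PSK. A minimal sketch of that sequence as plain rpc.py calls, every flag taken verbatim from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.ZvaF733Ncj   # PSK file reused by every stage of this test

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 $KEY
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0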
00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.334 08:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:27.334 "subsystems": [ 00:22:27.334 { 00:22:27.334 "subsystem": "keyring", 00:22:27.334 "config": [ 00:22:27.334 { 00:22:27.334 "method": "keyring_file_add_key", 00:22:27.334 "params": { 00:22:27.334 "name": "key0", 00:22:27.334 "path": "/tmp/tmp.ZvaF733Ncj" 00:22:27.334 } 00:22:27.334 } 00:22:27.334 ] 00:22:27.334 }, 00:22:27.334 { 00:22:27.334 "subsystem": "iobuf", 00:22:27.334 "config": [ 00:22:27.334 { 00:22:27.334 "method": "iobuf_set_options", 00:22:27.334 "params": { 00:22:27.334 "small_pool_count": 8192, 00:22:27.334 "large_pool_count": 1024, 00:22:27.334 "small_bufsize": 8192, 00:22:27.334 "large_bufsize": 135168 00:22:27.334 } 00:22:27.334 } 00:22:27.334 ] 00:22:27.334 }, 00:22:27.334 { 00:22:27.334 "subsystem": "sock", 00:22:27.334 "config": [ 00:22:27.334 { 00:22:27.334 "method": "sock_set_default_impl", 00:22:27.334 "params": { 00:22:27.334 "impl_name": "posix" 00:22:27.334 } 00:22:27.334 }, 00:22:27.334 { 00:22:27.334 "method": "sock_impl_set_options", 00:22:27.334 "params": { 00:22:27.334 "impl_name": "ssl", 00:22:27.334 "recv_buf_size": 4096, 00:22:27.334 "send_buf_size": 4096, 00:22:27.334 "enable_recv_pipe": true, 00:22:27.334 "enable_quickack": false, 00:22:27.334 "enable_placement_id": 0, 00:22:27.334 "enable_zerocopy_send_server": true, 00:22:27.334 "enable_zerocopy_send_client": false, 00:22:27.334 "zerocopy_threshold": 0, 00:22:27.334 "tls_version": 0, 00:22:27.334 "enable_ktls": false 00:22:27.334 } 00:22:27.334 }, 00:22:27.334 { 00:22:27.334 "method": "sock_impl_set_options", 00:22:27.334 "params": { 00:22:27.334 "impl_name": "posix", 00:22:27.334 "recv_buf_size": 2097152, 00:22:27.334 "send_buf_size": 2097152, 00:22:27.334 "enable_recv_pipe": true, 00:22:27.334 "enable_quickack": false, 00:22:27.334 "enable_placement_id": 0, 00:22:27.334 "enable_zerocopy_send_server": true, 00:22:27.334 "enable_zerocopy_send_client": false, 00:22:27.334 "zerocopy_threshold": 0, 00:22:27.334 "tls_version": 0, 00:22:27.334 "enable_ktls": false 00:22:27.334 } 00:22:27.334 } 00:22:27.334 ] 00:22:27.334 }, 00:22:27.334 { 00:22:27.334 "subsystem": "vmd", 00:22:27.334 "config": [] 00:22:27.334 }, 00:22:27.334 { 00:22:27.334 "subsystem": "accel", 00:22:27.334 "config": [ 00:22:27.334 { 00:22:27.334 "method": "accel_set_options", 00:22:27.334 "params": { 00:22:27.334 "small_cache_size": 128, 00:22:27.334 "large_cache_size": 16, 00:22:27.334 "task_count": 2048, 00:22:27.334 "sequence_count": 2048, 00:22:27.334 "buf_count": 2048 00:22:27.335 } 00:22:27.335 } 00:22:27.335 ] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "bdev", 00:22:27.335 "config": [ 00:22:27.335 { 00:22:27.335 "method": "bdev_set_options", 00:22:27.335 "params": { 00:22:27.335 "bdev_io_pool_size": 65535, 00:22:27.335 "bdev_io_cache_size": 256, 00:22:27.335 "bdev_auto_examine": true, 00:22:27.335 "iobuf_small_cache_size": 128, 00:22:27.335 "iobuf_large_cache_size": 16 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_raid_set_options", 00:22:27.335 
"params": { 00:22:27.335 "process_window_size_kb": 1024, 00:22:27.335 "process_max_bandwidth_mb_sec": 0 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_iscsi_set_options", 00:22:27.335 "params": { 00:22:27.335 "timeout_sec": 30 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_nvme_set_options", 00:22:27.335 "params": { 00:22:27.335 "action_on_timeout": "none", 00:22:27.335 "timeout_us": 0, 00:22:27.335 "timeout_admin_us": 0, 00:22:27.335 "keep_alive_timeout_ms": 10000, 00:22:27.335 "arbitration_burst": 0, 00:22:27.335 "low_priority_weight": 0, 00:22:27.335 "medium_priority_weight": 0, 00:22:27.335 "high_priority_weight": 0, 00:22:27.335 "nvme_adminq_poll_period_us": 10000, 00:22:27.335 "nvme_ioq_poll_period_us": 0, 00:22:27.335 "io_queue_requests": 512, 00:22:27.335 "delay_cmd_submit": true, 00:22:27.335 "transport_retry_count": 4, 00:22:27.335 "bdev_retry_count": 3, 00:22:27.335 "transport_ack_timeout": 0, 00:22:27.335 "ctrlr_loss_timeout_sec": 0, 00:22:27.335 "reconnect_delay_sec": 0, 00:22:27.335 "fast_io_fail_timeout_sec": 0, 00:22:27.335 "disable_auto_failback": false, 00:22:27.335 "generate_uuids": false, 00:22:27.335 "transport_tos": 0, 00:22:27.335 "nvme_error_stat": false, 00:22:27.335 "rdma_srq_size": 0, 00:22:27.335 "io_path_stat": false, 00:22:27.335 "allow_accel_sequence": false, 00:22:27.335 "rdma_max_cq_size": 0, 00:22:27.335 "rdma_cm_event_timeout_ms": 0, 00:22:27.335 "dhchap_digests": [ 00:22:27.335 "sha256", 00:22:27.335 "sha384", 00:22:27.335 "sha512" 00:22:27.335 ], 00:22:27.335 "dhchap_dhgroups": [ 00:22:27.335 "null", 00:22:27.335 "ffdhe2048", 00:22:27.335 "ffdhe3072", 00:22:27.335 "ffdhe4096", 00:22:27.335 "ffdhe6144", 00:22:27.335 "ffdhe8192" 00:22:27.335 ] 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_nvme_attach_controller", 00:22:27.335 "params": { 00:22:27.335 "name": "TLSTEST", 00:22:27.335 "trtype": "TCP", 00:22:27.335 "adrfam": "IPv4", 00:22:27.335 "traddr": "10.0.0.2", 00:22:27.335 "trsvcid": "4420", 00:22:27.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.335 "prchk_reftag": false, 00:22:27.335 "prchk_guard": false, 00:22:27.335 "ctrlr_loss_timeout_sec": 0, 00:22:27.335 "reconnect_delay_sec": 0, 00:22:27.335 "fast_io_fail_timeout_sec": 0, 00:22:27.335 "psk": "key0", 00:22:27.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.335 "hdgst": false, 00:22:27.335 "ddgst": false 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_nvme_set_hotplug", 00:22:27.335 "params": { 00:22:27.335 "period_us": 100000, 00:22:27.335 "enable": false 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_wait_for_examine" 00:22:27.335 } 00:22:27.335 ] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "nbd", 00:22:27.335 "config": [] 00:22:27.335 } 00:22:27.335 ] 00:22:27.335 }' 00:22:27.335 [2024-10-01 08:37:18.978802] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:22:27.335 [2024-10-01 08:37:18.978857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780464 ] 00:22:27.335 [2024-10-01 08:37:19.028455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.335 [2024-10-01 08:37:19.080470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.595 [2024-10-01 08:37:19.214066] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.166 08:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.166 08:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:28.166 08:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:28.166 Running I/O for 10 seconds... 00:22:38.468 6553.00 IOPS, 25.60 MiB/s 6497.00 IOPS, 25.38 MiB/s 6394.00 IOPS, 24.98 MiB/s 6096.00 IOPS, 23.81 MiB/s 5703.00 IOPS, 22.28 MiB/s 5635.67 IOPS, 22.01 MiB/s 5570.29 IOPS, 21.76 MiB/s 5536.00 IOPS, 21.62 MiB/s 5459.56 IOPS, 21.33 MiB/s 5394.80 IOPS, 21.07 MiB/s 00:22:38.468 Latency(us) 00:22:38.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.468 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:38.468 Verification LBA range: start 0x0 length 0x2000 00:22:38.468 TLSTESTn1 : 10.01 5400.19 21.09 0.00 0.00 23671.63 4450.99 64662.19 00:22:38.468 =================================================================================================================== 00:22:38.468 Total : 5400.19 21.09 0.00 0.00 23671.63 4450.99 64662.19 00:22:38.468 { 00:22:38.468 "results": [ 00:22:38.468 { 00:22:38.468 "job": "TLSTESTn1", 00:22:38.468 "core_mask": "0x4", 00:22:38.468 "workload": "verify", 00:22:38.468 "status": "finished", 00:22:38.468 "verify_range": { 00:22:38.468 "start": 0, 00:22:38.468 "length": 8192 00:22:38.468 }, 00:22:38.468 "queue_depth": 128, 00:22:38.468 "io_size": 4096, 00:22:38.468 "runtime": 10.013544, 00:22:38.468 "iops": 5400.185988097721, 00:22:38.468 "mibps": 21.094476516006722, 00:22:38.468 "io_failed": 0, 00:22:38.468 "io_timeout": 0, 00:22:38.468 "avg_latency_us": 23671.632741192785, 00:22:38.468 "min_latency_us": 4450.986666666667, 00:22:38.468 "max_latency_us": 64662.18666666667 00:22:38.468 } 00:22:38.468 ], 00:22:38.468 "core_count": 1 00:22:38.468 } 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3780464 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3780464 ']' 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3780464 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3780464 00:22:38.468 08:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3780464' 00:22:38.468 killing process with pid 3780464 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3780464 00:22:38.468 Received shutdown signal, test time was about 10.000000 seconds 00:22:38.468 00:22:38.468 Latency(us) 00:22:38.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.468 =================================================================================================================== 00:22:38.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:38.468 08:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3780464 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3780123 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3780123 ']' 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3780123 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3780123 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3780123' 00:22:38.468 killing process with pid 3780123 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3780123 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3780123 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.468 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.728 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3782512 00:22:38.728 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3782512 00:22:38.728 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:38.728 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3782512 ']' 00:22:38.728 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.728 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.728 08:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.728 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.728 08:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.728 [2024-10-01 08:37:30.350472] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:38.728 [2024-10-01 08:37:30.350532] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.728 [2024-10-01 08:37:30.416355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.728 [2024-10-01 08:37:30.480539] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.728 [2024-10-01 08:37:30.480576] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.728 [2024-10-01 08:37:30.480584] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.728 [2024-10-01 08:37:30.480591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.728 [2024-10-01 08:37:30.480597] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.728 [2024-10-01 08:37:30.481168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ZvaF733Ncj 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZvaF733Ncj 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:39.670 [2024-10-01 08:37:31.325169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.670 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:39.931 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:39.931 [2024-10-01 08:37:31.674049] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:22:39.931 [2024-10-01 08:37:31.674287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.931 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:40.191 malloc0 00:22:40.191 08:37:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:40.451 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:40.451 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3783075 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3783075 /var/tmp/bdevperf.sock 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3783075 ']' 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.712 08:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.712 [2024-10-01 08:37:32.434420] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
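As in the first pass, bdevperf is launched with -z, so it parks on /var/tmp/bdevperf.sock and runs no I/O until told to; the key registration and controller attach happen over that socket, and a companion script then fires the workload. A sketch of the hand-off, with the paths and flags from the run above and the 20-second RPC timeout from the earlier invocation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    # ...register key0 and attach the controller via rpc.py -s /var/tmp/bdevperf.sock...
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests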
00:22:40.712 [2024-10-01 08:37:32.434478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783075 ] 00:22:40.712 [2024-10-01 08:37:32.508466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.973 [2024-10-01 08:37:32.561964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.544 08:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.544 08:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:41.544 08:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:41.804 08:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:41.804 [2024-10-01 08:37:33.545028] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.804 nvme0n1 00:22:42.064 08:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.064 Running I/O for 1 seconds... 00:22:43.003 4080.00 IOPS, 15.94 MiB/s 00:22:43.003 Latency(us) 00:22:43.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.003 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:43.003 Verification LBA range: start 0x0 length 0x2000 00:22:43.003 nvme0n1 : 1.02 4137.99 16.16 0.00 0.00 30739.62 4532.91 69468.16 00:22:43.004 =================================================================================================================== 00:22:43.004 Total : 4137.99 16.16 0.00 0.00 30739.62 4532.91 69468.16 00:22:43.004 { 00:22:43.004 "results": [ 00:22:43.004 { 00:22:43.004 "job": "nvme0n1", 00:22:43.004 "core_mask": "0x2", 00:22:43.004 "workload": "verify", 00:22:43.004 "status": "finished", 00:22:43.004 "verify_range": { 00:22:43.004 "start": 0, 00:22:43.004 "length": 8192 00:22:43.004 }, 00:22:43.004 "queue_depth": 128, 00:22:43.004 "io_size": 4096, 00:22:43.004 "runtime": 1.016918, 00:22:43.004 "iops": 4137.993427198653, 00:22:43.004 "mibps": 16.16403682499474, 00:22:43.004 "io_failed": 0, 00:22:43.004 "io_timeout": 0, 00:22:43.004 "avg_latency_us": 30739.623422053235, 00:22:43.004 "min_latency_us": 4532.906666666667, 00:22:43.004 "max_latency_us": 69468.16 00:22:43.004 } 00:22:43.004 ], 00:22:43.004 "core_count": 1 00:22:43.004 } 00:22:43.004 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3783075 00:22:43.004 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3783075 ']' 00:22:43.004 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3783075 00:22:43.004 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:43.004 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
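The MiB/s column in the result block is simply IOPS scaled by the 4096-byte I/O size, which makes a quick sanity check on any bdevperf report; applied to the nvme0n1 row above:

    # 4137.99 IOPS * 4096 B per I/O / 1048576 B per MiB = 16.16 MiB/s, matching the report
    awk 'BEGIN { printf "%.2f MiB/s\n", 4137.99 * 4096 / 1048576 }'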
00:22:43.004 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3783075 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3783075' 00:22:43.263 killing process with pid 3783075 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3783075 00:22:43.263 Received shutdown signal, test time was about 1.000000 seconds 00:22:43.263 00:22:43.263 Latency(us) 00:22:43.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.263 =================================================================================================================== 00:22:43.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3783075 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3782512 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3782512 ']' 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3782512 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.263 08:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3782512 00:22:43.263 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.263 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.263 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3782512' 00:22:43.264 killing process with pid 3782512 00:22:43.264 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3782512 00:22:43.264 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3782512 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3783532 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3783532 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3783532 ']' 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
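Both teardown calls above walk the same guard rails before sending a signal: validate the PID argument, probe liveness with kill -0, resolve the command name (reactor_N for an SPDK app), and refuse to signal a sudo wrapper. A simplified reconstruction of that helper as inferred from the xtrace, not the verbatim common/autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                     # still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
            [ "$name" = sudo ] && return 1             # never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it before moving on
    }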
00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.524 08:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.524 [2024-10-01 08:37:35.239535] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:43.524 [2024-10-01 08:37:35.239595] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.524 [2024-10-01 08:37:35.305653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.784 [2024-10-01 08:37:35.369829] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.784 [2024-10-01 08:37:35.369866] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.784 [2024-10-01 08:37:35.369874] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.784 [2024-10-01 08:37:35.369880] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.784 [2024-10-01 08:37:35.369886] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
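The target bring-up traced above amounts to launching nvmf_tgt inside the test's network namespace and polling its RPC socket until it answers. A minimal sketch of that sequence, reusing the paths and flags from the trace; the polling loop is illustrative and simpler than the waitforlisten helper the test actually uses:

# Start the NVMe-oF target on one core with all tracepoint groups enabled
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
# Block until the target responds on its default RPC socket, /var/tmp/spdk.sock
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done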
00:22:43.784 [2024-10-01 08:37:35.370484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.355 [2024-10-01 08:37:36.053968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.355 malloc0 00:22:44.355 [2024-10-01 08:37:36.092304] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.355 [2024-10-01 08:37:36.092535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3783876 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3783876 /var/tmp/bdevperf.sock 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3783876 ']' 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.355 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.355 [2024-10-01 08:37:36.171905] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:22:44.355 [2024-10-01 08:37:36.171955] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783876 ] 00:22:44.615 [2024-10-01 08:37:36.246279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.615 [2024-10-01 08:37:36.299606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.186 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.186 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:45.186 08:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj 00:22:45.447 08:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:45.707 [2024-10-01 08:37:37.270697] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.707 nvme0n1 00:22:45.707 08:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:45.707 Running I/O for 1 seconds... 00:22:46.649 5478.00 IOPS, 21.40 MiB/s 00:22:46.649 Latency(us) 00:22:46.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.649 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:46.649 Verification LBA range: start 0x0 length 0x2000 00:22:46.649 nvme0n1 : 1.02 5522.22 21.57 0.00 0.00 23022.11 7427.41 23156.05 00:22:46.649 =================================================================================================================== 00:22:46.649 Total : 5522.22 21.57 0.00 0.00 23022.11 7427.41 23156.05 00:22:46.649 { 00:22:46.649 "results": [ 00:22:46.649 { 00:22:46.649 "job": "nvme0n1", 00:22:46.649 "core_mask": "0x2", 00:22:46.649 "workload": "verify", 00:22:46.649 "status": "finished", 00:22:46.649 "verify_range": { 00:22:46.649 "start": 0, 00:22:46.649 "length": 8192 00:22:46.649 }, 00:22:46.649 "queue_depth": 128, 00:22:46.649 "io_size": 4096, 00:22:46.649 "runtime": 1.015172, 00:22:46.649 "iops": 5522.216924816681, 00:22:46.649 "mibps": 21.57115986256516, 00:22:46.649 "io_failed": 0, 00:22:46.649 "io_timeout": 0, 00:22:46.649 "avg_latency_us": 23022.106485907956, 00:22:46.649 "min_latency_us": 7427.413333333333, 00:22:46.649 "max_latency_us": 23156.053333333333 00:22:46.649 } 00:22:46.649 ], 00:22:46.649 "core_count": 1 00:22:46.649 } 00:22:46.909 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:46.909 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.909 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.909 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.909 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:46.909 "subsystems": [ 
00:22:46.909 { 00:22:46.909 "subsystem": "keyring", 00:22:46.909 "config": [ 00:22:46.909 { 00:22:46.909 "method": "keyring_file_add_key", 00:22:46.909 "params": { 00:22:46.909 "name": "key0", 00:22:46.909 "path": "/tmp/tmp.ZvaF733Ncj" 00:22:46.909 } 00:22:46.909 } 00:22:46.909 ] 00:22:46.909 }, 00:22:46.909 { 00:22:46.909 "subsystem": "iobuf", 00:22:46.909 "config": [ 00:22:46.909 { 00:22:46.909 "method": "iobuf_set_options", 00:22:46.909 "params": { 00:22:46.909 "small_pool_count": 8192, 00:22:46.909 "large_pool_count": 1024, 00:22:46.909 "small_bufsize": 8192, 00:22:46.909 "large_bufsize": 135168 00:22:46.909 } 00:22:46.909 } 00:22:46.909 ] 00:22:46.909 }, 00:22:46.909 { 00:22:46.909 "subsystem": "sock", 00:22:46.909 "config": [ 00:22:46.909 { 00:22:46.909 "method": "sock_set_default_impl", 00:22:46.909 "params": { 00:22:46.909 "impl_name": "posix" 00:22:46.909 } 00:22:46.909 }, 00:22:46.909 { 00:22:46.909 "method": "sock_impl_set_options", 00:22:46.909 "params": { 00:22:46.909 "impl_name": "ssl", 00:22:46.909 "recv_buf_size": 4096, 00:22:46.909 "send_buf_size": 4096, 00:22:46.909 "enable_recv_pipe": true, 00:22:46.909 "enable_quickack": false, 00:22:46.909 "enable_placement_id": 0, 00:22:46.909 "enable_zerocopy_send_server": true, 00:22:46.909 "enable_zerocopy_send_client": false, 00:22:46.909 "zerocopy_threshold": 0, 00:22:46.909 "tls_version": 0, 00:22:46.909 "enable_ktls": false 00:22:46.909 } 00:22:46.909 }, 00:22:46.909 { 00:22:46.909 "method": "sock_impl_set_options", 00:22:46.909 "params": { 00:22:46.909 "impl_name": "posix", 00:22:46.909 "recv_buf_size": 2097152, 00:22:46.909 "send_buf_size": 2097152, 00:22:46.909 "enable_recv_pipe": true, 00:22:46.909 "enable_quickack": false, 00:22:46.909 "enable_placement_id": 0, 00:22:46.909 "enable_zerocopy_send_server": true, 00:22:46.909 "enable_zerocopy_send_client": false, 00:22:46.909 "zerocopy_threshold": 0, 00:22:46.909 "tls_version": 0, 00:22:46.909 "enable_ktls": false 00:22:46.909 } 00:22:46.909 } 00:22:46.909 ] 00:22:46.909 }, 00:22:46.909 { 00:22:46.909 "subsystem": "vmd", 00:22:46.909 "config": [] 00:22:46.909 }, 00:22:46.909 { 00:22:46.909 "subsystem": "accel", 00:22:46.909 "config": [ 00:22:46.909 { 00:22:46.909 "method": "accel_set_options", 00:22:46.909 "params": { 00:22:46.909 "small_cache_size": 128, 00:22:46.909 "large_cache_size": 16, 00:22:46.909 "task_count": 2048, 00:22:46.909 "sequence_count": 2048, 00:22:46.909 "buf_count": 2048 00:22:46.909 } 00:22:46.909 } 00:22:46.909 ] 00:22:46.909 }, 00:22:46.909 { 00:22:46.909 "subsystem": "bdev", 00:22:46.909 "config": [ 00:22:46.909 { 00:22:46.909 "method": "bdev_set_options", 00:22:46.909 "params": { 00:22:46.909 "bdev_io_pool_size": 65535, 00:22:46.909 "bdev_io_cache_size": 256, 00:22:46.909 "bdev_auto_examine": true, 00:22:46.909 "iobuf_small_cache_size": 128, 00:22:46.909 "iobuf_large_cache_size": 16 00:22:46.909 } 00:22:46.909 }, 00:22:46.909 { 00:22:46.910 "method": "bdev_raid_set_options", 00:22:46.910 "params": { 00:22:46.910 "process_window_size_kb": 1024, 00:22:46.910 "process_max_bandwidth_mb_sec": 0 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "bdev_iscsi_set_options", 00:22:46.910 "params": { 00:22:46.910 "timeout_sec": 30 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "bdev_nvme_set_options", 00:22:46.910 "params": { 00:22:46.910 "action_on_timeout": "none", 00:22:46.910 "timeout_us": 0, 00:22:46.910 "timeout_admin_us": 0, 00:22:46.910 "keep_alive_timeout_ms": 10000, 00:22:46.910 "arbitration_burst": 0, 
00:22:46.910 "low_priority_weight": 0, 00:22:46.910 "medium_priority_weight": 0, 00:22:46.910 "high_priority_weight": 0, 00:22:46.910 "nvme_adminq_poll_period_us": 10000, 00:22:46.910 "nvme_ioq_poll_period_us": 0, 00:22:46.910 "io_queue_requests": 0, 00:22:46.910 "delay_cmd_submit": true, 00:22:46.910 "transport_retry_count": 4, 00:22:46.910 "bdev_retry_count": 3, 00:22:46.910 "transport_ack_timeout": 0, 00:22:46.910 "ctrlr_loss_timeout_sec": 0, 00:22:46.910 "reconnect_delay_sec": 0, 00:22:46.910 "fast_io_fail_timeout_sec": 0, 00:22:46.910 "disable_auto_failback": false, 00:22:46.910 "generate_uuids": false, 00:22:46.910 "transport_tos": 0, 00:22:46.910 "nvme_error_stat": false, 00:22:46.910 "rdma_srq_size": 0, 00:22:46.910 "io_path_stat": false, 00:22:46.910 "allow_accel_sequence": false, 00:22:46.910 "rdma_max_cq_size": 0, 00:22:46.910 "rdma_cm_event_timeout_ms": 0, 00:22:46.910 "dhchap_digests": [ 00:22:46.910 "sha256", 00:22:46.910 "sha384", 00:22:46.910 "sha512" 00:22:46.910 ], 00:22:46.910 "dhchap_dhgroups": [ 00:22:46.910 "null", 00:22:46.910 "ffdhe2048", 00:22:46.910 "ffdhe3072", 00:22:46.910 "ffdhe4096", 00:22:46.910 "ffdhe6144", 00:22:46.910 "ffdhe8192" 00:22:46.910 ] 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "bdev_nvme_set_hotplug", 00:22:46.910 "params": { 00:22:46.910 "period_us": 100000, 00:22:46.910 "enable": false 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "bdev_malloc_create", 00:22:46.910 "params": { 00:22:46.910 "name": "malloc0", 00:22:46.910 "num_blocks": 8192, 00:22:46.910 "block_size": 4096, 00:22:46.910 "physical_block_size": 4096, 00:22:46.910 "uuid": "f359a326-fad2-4376-a4de-68d7901be255", 00:22:46.910 "optimal_io_boundary": 0, 00:22:46.910 "md_size": 0, 00:22:46.910 "dif_type": 0, 00:22:46.910 "dif_is_head_of_md": false, 00:22:46.910 "dif_pi_format": 0 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "bdev_wait_for_examine" 00:22:46.910 } 00:22:46.910 ] 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "subsystem": "nbd", 00:22:46.910 "config": [] 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "subsystem": "scheduler", 00:22:46.910 "config": [ 00:22:46.910 { 00:22:46.910 "method": "framework_set_scheduler", 00:22:46.910 "params": { 00:22:46.910 "name": "static" 00:22:46.910 } 00:22:46.910 } 00:22:46.910 ] 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "subsystem": "nvmf", 00:22:46.910 "config": [ 00:22:46.910 { 00:22:46.910 "method": "nvmf_set_config", 00:22:46.910 "params": { 00:22:46.910 "discovery_filter": "match_any", 00:22:46.910 "admin_cmd_passthru": { 00:22:46.910 "identify_ctrlr": false 00:22:46.910 }, 00:22:46.910 "dhchap_digests": [ 00:22:46.910 "sha256", 00:22:46.910 "sha384", 00:22:46.910 "sha512" 00:22:46.910 ], 00:22:46.910 "dhchap_dhgroups": [ 00:22:46.910 "null", 00:22:46.910 "ffdhe2048", 00:22:46.910 "ffdhe3072", 00:22:46.910 "ffdhe4096", 00:22:46.910 "ffdhe6144", 00:22:46.910 "ffdhe8192" 00:22:46.910 ] 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "nvmf_set_max_subsystems", 00:22:46.910 "params": { 00:22:46.910 "max_subsystems": 1024 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "nvmf_set_crdt", 00:22:46.910 "params": { 00:22:46.910 "crdt1": 0, 00:22:46.910 "crdt2": 0, 00:22:46.910 "crdt3": 0 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "nvmf_create_transport", 00:22:46.910 "params": { 00:22:46.910 "trtype": "TCP", 00:22:46.910 "max_queue_depth": 128, 00:22:46.910 "max_io_qpairs_per_ctrlr": 127, 00:22:46.910 
"in_capsule_data_size": 4096, 00:22:46.910 "max_io_size": 131072, 00:22:46.910 "io_unit_size": 131072, 00:22:46.910 "max_aq_depth": 128, 00:22:46.910 "num_shared_buffers": 511, 00:22:46.910 "buf_cache_size": 4294967295, 00:22:46.910 "dif_insert_or_strip": false, 00:22:46.910 "zcopy": false, 00:22:46.910 "c2h_success": false, 00:22:46.910 "sock_priority": 0, 00:22:46.910 "abort_timeout_sec": 1, 00:22:46.910 "ack_timeout": 0, 00:22:46.910 "data_wr_pool_size": 0 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "nvmf_create_subsystem", 00:22:46.910 "params": { 00:22:46.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.910 "allow_any_host": false, 00:22:46.910 "serial_number": "00000000000000000000", 00:22:46.910 "model_number": "SPDK bdev Controller", 00:22:46.910 "max_namespaces": 32, 00:22:46.910 "min_cntlid": 1, 00:22:46.910 "max_cntlid": 65519, 00:22:46.910 "ana_reporting": false 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "nvmf_subsystem_add_host", 00:22:46.910 "params": { 00:22:46.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.910 "host": "nqn.2016-06.io.spdk:host1", 00:22:46.910 "psk": "key0" 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "nvmf_subsystem_add_ns", 00:22:46.910 "params": { 00:22:46.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.910 "namespace": { 00:22:46.910 "nsid": 1, 00:22:46.910 "bdev_name": "malloc0", 00:22:46.910 "nguid": "F359A326FAD24376A4DE68D7901BE255", 00:22:46.910 "uuid": "f359a326-fad2-4376-a4de-68d7901be255", 00:22:46.910 "no_auto_visible": false 00:22:46.910 } 00:22:46.910 } 00:22:46.910 }, 00:22:46.910 { 00:22:46.910 "method": "nvmf_subsystem_add_listener", 00:22:46.910 "params": { 00:22:46.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.910 "listen_address": { 00:22:46.910 "trtype": "TCP", 00:22:46.910 "adrfam": "IPv4", 00:22:46.910 "traddr": "10.0.0.2", 00:22:46.910 "trsvcid": "4420" 00:22:46.910 }, 00:22:46.910 "secure_channel": false, 00:22:46.910 "sock_impl": "ssl" 00:22:46.910 } 00:22:46.910 } 00:22:46.910 ] 00:22:46.910 } 00:22:46.910 ] 00:22:46.910 }' 00:22:46.910 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:47.172 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:47.172 "subsystems": [ 00:22:47.172 { 00:22:47.172 "subsystem": "keyring", 00:22:47.172 "config": [ 00:22:47.172 { 00:22:47.172 "method": "keyring_file_add_key", 00:22:47.172 "params": { 00:22:47.172 "name": "key0", 00:22:47.172 "path": "/tmp/tmp.ZvaF733Ncj" 00:22:47.172 } 00:22:47.172 } 00:22:47.172 ] 00:22:47.172 }, 00:22:47.172 { 00:22:47.172 "subsystem": "iobuf", 00:22:47.172 "config": [ 00:22:47.172 { 00:22:47.172 "method": "iobuf_set_options", 00:22:47.172 "params": { 00:22:47.172 "small_pool_count": 8192, 00:22:47.172 "large_pool_count": 1024, 00:22:47.172 "small_bufsize": 8192, 00:22:47.172 "large_bufsize": 135168 00:22:47.172 } 00:22:47.172 } 00:22:47.172 ] 00:22:47.172 }, 00:22:47.172 { 00:22:47.172 "subsystem": "sock", 00:22:47.172 "config": [ 00:22:47.172 { 00:22:47.172 "method": "sock_set_default_impl", 00:22:47.172 "params": { 00:22:47.172 "impl_name": "posix" 00:22:47.172 } 00:22:47.172 }, 00:22:47.172 { 00:22:47.172 "method": "sock_impl_set_options", 00:22:47.172 "params": { 00:22:47.172 "impl_name": "ssl", 00:22:47.172 "recv_buf_size": 4096, 00:22:47.172 "send_buf_size": 4096, 00:22:47.172 "enable_recv_pipe": true, 00:22:47.172 
"enable_quickack": false, 00:22:47.172 "enable_placement_id": 0, 00:22:47.172 "enable_zerocopy_send_server": true, 00:22:47.172 "enable_zerocopy_send_client": false, 00:22:47.172 "zerocopy_threshold": 0, 00:22:47.172 "tls_version": 0, 00:22:47.172 "enable_ktls": false 00:22:47.172 } 00:22:47.172 }, 00:22:47.172 { 00:22:47.172 "method": "sock_impl_set_options", 00:22:47.172 "params": { 00:22:47.172 "impl_name": "posix", 00:22:47.172 "recv_buf_size": 2097152, 00:22:47.172 "send_buf_size": 2097152, 00:22:47.172 "enable_recv_pipe": true, 00:22:47.172 "enable_quickack": false, 00:22:47.172 "enable_placement_id": 0, 00:22:47.172 "enable_zerocopy_send_server": true, 00:22:47.172 "enable_zerocopy_send_client": false, 00:22:47.172 "zerocopy_threshold": 0, 00:22:47.172 "tls_version": 0, 00:22:47.172 "enable_ktls": false 00:22:47.172 } 00:22:47.172 } 00:22:47.172 ] 00:22:47.172 }, 00:22:47.172 { 00:22:47.172 "subsystem": "vmd", 00:22:47.172 "config": [] 00:22:47.172 }, 00:22:47.172 { 00:22:47.172 "subsystem": "accel", 00:22:47.172 "config": [ 00:22:47.172 { 00:22:47.172 "method": "accel_set_options", 00:22:47.172 "params": { 00:22:47.172 "small_cache_size": 128, 00:22:47.173 "large_cache_size": 16, 00:22:47.173 "task_count": 2048, 00:22:47.173 "sequence_count": 2048, 00:22:47.173 "buf_count": 2048 00:22:47.173 } 00:22:47.173 } 00:22:47.173 ] 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "subsystem": "bdev", 00:22:47.173 "config": [ 00:22:47.173 { 00:22:47.173 "method": "bdev_set_options", 00:22:47.173 "params": { 00:22:47.173 "bdev_io_pool_size": 65535, 00:22:47.173 "bdev_io_cache_size": 256, 00:22:47.173 "bdev_auto_examine": true, 00:22:47.173 "iobuf_small_cache_size": 128, 00:22:47.173 "iobuf_large_cache_size": 16 00:22:47.173 } 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "method": "bdev_raid_set_options", 00:22:47.173 "params": { 00:22:47.173 "process_window_size_kb": 1024, 00:22:47.173 "process_max_bandwidth_mb_sec": 0 00:22:47.173 } 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "method": "bdev_iscsi_set_options", 00:22:47.173 "params": { 00:22:47.173 "timeout_sec": 30 00:22:47.173 } 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "method": "bdev_nvme_set_options", 00:22:47.173 "params": { 00:22:47.173 "action_on_timeout": "none", 00:22:47.173 "timeout_us": 0, 00:22:47.173 "timeout_admin_us": 0, 00:22:47.173 "keep_alive_timeout_ms": 10000, 00:22:47.173 "arbitration_burst": 0, 00:22:47.173 "low_priority_weight": 0, 00:22:47.173 "medium_priority_weight": 0, 00:22:47.173 "high_priority_weight": 0, 00:22:47.173 "nvme_adminq_poll_period_us": 10000, 00:22:47.173 "nvme_ioq_poll_period_us": 0, 00:22:47.173 "io_queue_requests": 512, 00:22:47.173 "delay_cmd_submit": true, 00:22:47.173 "transport_retry_count": 4, 00:22:47.173 "bdev_retry_count": 3, 00:22:47.173 "transport_ack_timeout": 0, 00:22:47.173 "ctrlr_loss_timeout_sec": 0, 00:22:47.173 "reconnect_delay_sec": 0, 00:22:47.173 "fast_io_fail_timeout_sec": 0, 00:22:47.173 "disable_auto_failback": false, 00:22:47.173 "generate_uuids": false, 00:22:47.173 "transport_tos": 0, 00:22:47.173 "nvme_error_stat": false, 00:22:47.173 "rdma_srq_size": 0, 00:22:47.173 "io_path_stat": false, 00:22:47.173 "allow_accel_sequence": false, 00:22:47.173 "rdma_max_cq_size": 0, 00:22:47.173 "rdma_cm_event_timeout_ms": 0, 00:22:47.173 "dhchap_digests": [ 00:22:47.173 "sha256", 00:22:47.173 "sha384", 00:22:47.173 "sha512" 00:22:47.173 ], 00:22:47.173 "dhchap_dhgroups": [ 00:22:47.173 "null", 00:22:47.173 "ffdhe2048", 00:22:47.173 "ffdhe3072", 00:22:47.173 "ffdhe4096", 00:22:47.173 
"ffdhe6144", 00:22:47.173 "ffdhe8192" 00:22:47.173 ] 00:22:47.173 } 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "method": "bdev_nvme_attach_controller", 00:22:47.173 "params": { 00:22:47.173 "name": "nvme0", 00:22:47.173 "trtype": "TCP", 00:22:47.173 "adrfam": "IPv4", 00:22:47.173 "traddr": "10.0.0.2", 00:22:47.173 "trsvcid": "4420", 00:22:47.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.173 "prchk_reftag": false, 00:22:47.173 "prchk_guard": false, 00:22:47.173 "ctrlr_loss_timeout_sec": 0, 00:22:47.173 "reconnect_delay_sec": 0, 00:22:47.173 "fast_io_fail_timeout_sec": 0, 00:22:47.173 "psk": "key0", 00:22:47.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.173 "hdgst": false, 00:22:47.173 "ddgst": false 00:22:47.173 } 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "method": "bdev_nvme_set_hotplug", 00:22:47.173 "params": { 00:22:47.173 "period_us": 100000, 00:22:47.173 "enable": false 00:22:47.173 } 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "method": "bdev_enable_histogram", 00:22:47.173 "params": { 00:22:47.173 "name": "nvme0n1", 00:22:47.173 "enable": true 00:22:47.173 } 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "method": "bdev_wait_for_examine" 00:22:47.173 } 00:22:47.173 ] 00:22:47.173 }, 00:22:47.173 { 00:22:47.173 "subsystem": "nbd", 00:22:47.173 "config": [] 00:22:47.173 } 00:22:47.173 ] 00:22:47.173 }' 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3783876 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3783876 ']' 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3783876 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3783876 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3783876' 00:22:47.173 killing process with pid 3783876 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3783876 00:22:47.173 Received shutdown signal, test time was about 1.000000 seconds 00:22:47.173 00:22:47.173 Latency(us) 00:22:47.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.173 =================================================================================================================== 00:22:47.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.173 08:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3783876 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3783532 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3783532 ']' 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3783532 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3783532 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3783532' 00:22:47.434 killing process with pid 3783532 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3783532 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3783532 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:47.434 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:47.434 "subsystems": [ 00:22:47.434 { 00:22:47.434 "subsystem": "keyring", 00:22:47.434 "config": [ 00:22:47.434 { 00:22:47.434 "method": "keyring_file_add_key", 00:22:47.434 "params": { 00:22:47.434 "name": "key0", 00:22:47.434 "path": "/tmp/tmp.ZvaF733Ncj" 00:22:47.434 } 00:22:47.434 } 00:22:47.434 ] 00:22:47.434 }, 00:22:47.434 { 00:22:47.435 "subsystem": "iobuf", 00:22:47.435 "config": [ 00:22:47.435 { 00:22:47.435 "method": "iobuf_set_options", 00:22:47.435 "params": { 00:22:47.435 "small_pool_count": 8192, 00:22:47.435 "large_pool_count": 1024, 00:22:47.435 "small_bufsize": 8192, 00:22:47.435 "large_bufsize": 135168 00:22:47.435 } 00:22:47.435 } 00:22:47.435 ] 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "subsystem": "sock", 00:22:47.435 "config": [ 00:22:47.435 { 00:22:47.435 "method": "sock_set_default_impl", 00:22:47.435 "params": { 00:22:47.435 "impl_name": "posix" 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "sock_impl_set_options", 00:22:47.435 "params": { 00:22:47.435 "impl_name": "ssl", 00:22:47.435 "recv_buf_size": 4096, 00:22:47.435 "send_buf_size": 4096, 00:22:47.435 "enable_recv_pipe": true, 00:22:47.435 "enable_quickack": false, 00:22:47.435 "enable_placement_id": 0, 00:22:47.435 "enable_zerocopy_send_server": true, 00:22:47.435 "enable_zerocopy_send_client": false, 00:22:47.435 "zerocopy_threshold": 0, 00:22:47.435 "tls_version": 0, 00:22:47.435 "enable_ktls": false 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "sock_impl_set_options", 00:22:47.435 "params": { 00:22:47.435 "impl_name": "posix", 00:22:47.435 "recv_buf_size": 2097152, 00:22:47.435 "send_buf_size": 2097152, 00:22:47.435 "enable_recv_pipe": true, 00:22:47.435 "enable_quickack": false, 00:22:47.435 "enable_placement_id": 0, 00:22:47.435 "enable_zerocopy_send_server": true, 00:22:47.435 "enable_zerocopy_send_client": false, 00:22:47.435 "zerocopy_threshold": 0, 00:22:47.435 "tls_version": 0, 00:22:47.435 "enable_ktls": false 00:22:47.435 } 00:22:47.435 } 00:22:47.435 ] 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "subsystem": "vmd", 00:22:47.435 "config": [] 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "subsystem": "accel", 00:22:47.435 "config": [ 00:22:47.435 { 00:22:47.435 "method": "accel_set_options", 00:22:47.435 "params": { 
00:22:47.435 "small_cache_size": 128, 00:22:47.435 "large_cache_size": 16, 00:22:47.435 "task_count": 2048, 00:22:47.435 "sequence_count": 2048, 00:22:47.435 "buf_count": 2048 00:22:47.435 } 00:22:47.435 } 00:22:47.435 ] 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "subsystem": "bdev", 00:22:47.435 "config": [ 00:22:47.435 { 00:22:47.435 "method": "bdev_set_options", 00:22:47.435 "params": { 00:22:47.435 "bdev_io_pool_size": 65535, 00:22:47.435 "bdev_io_cache_size": 256, 00:22:47.435 "bdev_auto_examine": true, 00:22:47.435 "iobuf_small_cache_size": 128, 00:22:47.435 "iobuf_large_cache_size": 16 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "bdev_raid_set_options", 00:22:47.435 "params": { 00:22:47.435 "process_window_size_kb": 1024, 00:22:47.435 "process_max_bandwidth_mb_sec": 0 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "bdev_iscsi_set_options", 00:22:47.435 "params": { 00:22:47.435 "timeout_sec": 30 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "bdev_nvme_set_options", 00:22:47.435 "params": { 00:22:47.435 "action_on_timeout": "none", 00:22:47.435 "timeout_us": 0, 00:22:47.435 "timeout_admin_us": 0, 00:22:47.435 "keep_alive_timeout_ms": 10000, 00:22:47.435 "arbitration_burst": 0, 00:22:47.435 "low_priority_weight": 0, 00:22:47.435 "medium_priority_weight": 0, 00:22:47.435 "high_priority_weight": 0, 00:22:47.435 "nvme_adminq_poll_period_us": 10000, 00:22:47.435 "nvme_ioq_poll_period_us": 0, 00:22:47.435 "io_queue_requests": 0, 00:22:47.435 "delay_cmd_submit": true, 00:22:47.435 "transport_retry_count": 4, 00:22:47.435 "bdev_retry_count": 3, 00:22:47.435 "transport_ack_timeout": 0, 00:22:47.435 "ctrlr_loss_timeout_sec": 0, 00:22:47.435 "reconnect_delay_sec": 0, 00:22:47.435 "fast_io_fail_timeout_sec": 0, 00:22:47.435 "disable_auto_failback": false, 00:22:47.435 "generate_uuids": false, 00:22:47.435 "transport_tos": 0, 00:22:47.435 "nvme_error_stat": false, 00:22:47.435 "rdma_srq_size": 0, 00:22:47.435 "io_path_stat": false, 00:22:47.435 "allow_accel_sequence": false, 00:22:47.435 "rdma_max_cq_size": 0, 00:22:47.435 "rdma_cm_event_timeout_ms": 0, 00:22:47.435 "dhchap_digests": [ 00:22:47.435 "sha256", 00:22:47.435 "sha384", 00:22:47.435 "sha512" 00:22:47.435 ], 00:22:47.435 "dhchap_dhgroups": [ 00:22:47.435 "null", 00:22:47.435 "ffdhe2048", 00:22:47.435 "ffdhe3072", 00:22:47.435 "ffdhe4096", 00:22:47.435 "ffdhe6144", 00:22:47.435 "ffdhe8192" 00:22:47.435 ] 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "bdev_nvme_set_hotplug", 00:22:47.435 "params": { 00:22:47.435 "period_us": 100000, 00:22:47.435 "enable": false 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "bdev_malloc_create", 00:22:47.435 "params": { 00:22:47.435 "name": "malloc0", 00:22:47.435 "num_blocks": 8192, 00:22:47.435 "block_size": 4096, 00:22:47.435 "physical_block_size": 4096, 00:22:47.435 "uuid": "f359a326-fad2-4376-a4de-68d7901be255", 00:22:47.435 "optimal_io_boundary": 0, 00:22:47.435 "md_size": 0, 00:22:47.435 "dif_type": 0, 00:22:47.435 "dif_is_head_of_md": false, 00:22:47.435 "dif_pi_format": 0 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "bdev_wait_for_examine" 00:22:47.435 } 00:22:47.435 ] 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "subsystem": "nbd", 00:22:47.435 "config": [] 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "subsystem": "scheduler", 00:22:47.435 "config": [ 00:22:47.435 { 00:22:47.435 "method": "framework_set_scheduler", 00:22:47.435 "params": { 00:22:47.435 "name": 
"static" 00:22:47.435 } 00:22:47.435 } 00:22:47.435 ] 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "subsystem": "nvmf", 00:22:47.435 "config": [ 00:22:47.435 { 00:22:47.435 "method": "nvmf_set_config", 00:22:47.435 "params": { 00:22:47.435 "discovery_filter": "match_any", 00:22:47.435 "admin_cmd_passthru": { 00:22:47.435 "identify_ctrlr": false 00:22:47.435 }, 00:22:47.435 "dhchap_digests": [ 00:22:47.435 "sha256", 00:22:47.435 "sha384", 00:22:47.435 "sha512" 00:22:47.435 ], 00:22:47.435 "dhchap_dhgroups": [ 00:22:47.435 "null", 00:22:47.435 "ffdhe2048", 00:22:47.435 "ffdhe3072", 00:22:47.435 "ffdhe4096", 00:22:47.435 "ffdhe6144", 00:22:47.435 "ffdhe8192" 00:22:47.435 ] 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "nvmf_set_max_subsystems", 00:22:47.435 "params": { 00:22:47.435 "max_subsystems": 1024 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "nvmf_set_crdt", 00:22:47.435 "params": { 00:22:47.435 "crdt1": 0, 00:22:47.435 "crdt2": 0, 00:22:47.435 "crdt3": 0 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "nvmf_create_transport", 00:22:47.435 "params": { 00:22:47.435 "trtype": "TCP", 00:22:47.435 "max_queue_depth": 128, 00:22:47.435 "max_io_qpairs_per_ctrlr": 127, 00:22:47.435 "in_capsule_data_size": 4096, 00:22:47.435 "max_io_size": 131072, 00:22:47.435 "io_unit_size": 131072, 00:22:47.435 "max_aq_depth": 128, 00:22:47.435 "num_shared_buffers": 511, 00:22:47.435 "buf_cache_size": 4294967295, 00:22:47.435 "dif_insert_or_strip": false, 00:22:47.435 "zcopy": false, 00:22:47.435 "c2h_success": false, 00:22:47.435 "sock_priority": 0, 00:22:47.435 "abort_timeout_sec": 1, 00:22:47.435 "ack_timeout": 0, 00:22:47.435 "data_wr_pool_size": 0 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "nvmf_create_subsystem", 00:22:47.435 "params": { 00:22:47.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.435 "allow_any_host": false, 00:22:47.435 "serial_number": "00000000000000000000", 00:22:47.435 "model_number": "SPDK bdev Controller", 00:22:47.435 "max_namespaces": 32, 00:22:47.435 "min_cntlid": 1, 00:22:47.435 "max_cntlid": 65519, 00:22:47.435 "ana_reporting": false 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "nvmf_subsystem_add_host", 00:22:47.435 "params": { 00:22:47.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.435 "host": "nqn.2016-06.io.spdk:host1", 00:22:47.435 "psk": "key0" 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "nvmf_subsystem_add_ns", 00:22:47.435 "params": { 00:22:47.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.435 "namespace": { 00:22:47.435 "nsid": 1, 00:22:47.435 "bdev_name": "malloc0", 00:22:47.435 "nguid": "F359A326FAD24376A4DE68D7901BE255", 00:22:47.435 "uuid": "f359a326-fad2-4376-a4de-68d7901be255", 00:22:47.435 "no_auto_visible": false 00:22:47.435 } 00:22:47.435 } 00:22:47.435 }, 00:22:47.435 { 00:22:47.435 "method": "nvmf_subsystem_add_listener", 00:22:47.435 "params": { 00:22:47.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.435 "listen_address": { 00:22:47.435 "trtype": "TCP", 00:22:47.435 "adrfam": "IPv4", 00:22:47.435 "traddr": "10.0.0.2", 00:22:47.435 "trsvcid": "4420" 00:22:47.435 }, 00:22:47.435 "secure_channel": false, 00:22:47.435 "sock_impl": "ssl" 00:22:47.435 } 00:22:47.436 } 00:22:47.436 ] 00:22:47.436 } 00:22:47.436 ] 00:22:47.436 }' 00:22:47.436 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # 
nvmfpid=3784442 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3784442 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3784442 ']' 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.697 08:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.697 [2024-10-01 08:37:39.321266] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:22:47.697 [2024-10-01 08:37:39.321327] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.698 [2024-10-01 08:37:39.386836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.698 [2024-10-01 08:37:39.451193] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.698 [2024-10-01 08:37:39.451228] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.698 [2024-10-01 08:37:39.451236] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.698 [2024-10-01 08:37:39.451242] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.698 [2024-10-01 08:37:39.451248] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
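Note the -c /dev/fd/62 on the nvmf_tgt command line above: the JSON blob captured earlier with save_config (held in tgtcfg) is echoed back into the new target through a process substitution, replaying the keyring, malloc0 namespace, and TLS listener state without ever writing a config file to disk. A rough shell equivalent, assuming the same socket and namespace (the exact fd number is whatever the shell assigns):

# Capture the running target's configuration, then boot a fresh target from it
tgtcfg=$(./scripts/rpc.py -s /var/tmp/spdk.sock save_config)
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")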
00:22:47.698 [2024-10-01 08:37:39.451823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.959 [2024-10-01 08:37:39.658481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.959 [2024-10-01 08:37:39.690490] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:47.959 [2024-10-01 08:37:39.690716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3784593 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3784593 /var/tmp/bdevperf.sock 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3784593 ']' 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
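As in the earlier runs, the initiator side is an idle bdevperf (-z) configured over its own RPC socket: the PSK file is registered as key0, the controller is attached with --psk, and perform_tests starts the verify workload. Condensed from the commands traced in this log (this final round passes the same settings up front via -c /dev/fd/63 rather than separate rpc.py calls):

./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZvaF733Ncj
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests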
00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:48.530 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.531 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:48.531 "subsystems": [ 00:22:48.531 { 00:22:48.531 "subsystem": "keyring", 00:22:48.531 "config": [ 00:22:48.531 { 00:22:48.531 "method": "keyring_file_add_key", 00:22:48.531 "params": { 00:22:48.531 "name": "key0", 00:22:48.531 "path": "/tmp/tmp.ZvaF733Ncj" 00:22:48.531 } 00:22:48.531 } 00:22:48.531 ] 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "subsystem": "iobuf", 00:22:48.531 "config": [ 00:22:48.531 { 00:22:48.531 "method": "iobuf_set_options", 00:22:48.531 "params": { 00:22:48.531 "small_pool_count": 8192, 00:22:48.531 "large_pool_count": 1024, 00:22:48.531 "small_bufsize": 8192, 00:22:48.531 "large_bufsize": 135168 00:22:48.531 } 00:22:48.531 } 00:22:48.531 ] 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "subsystem": "sock", 00:22:48.531 "config": [ 00:22:48.531 { 00:22:48.531 "method": "sock_set_default_impl", 00:22:48.531 "params": { 00:22:48.531 "impl_name": "posix" 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "sock_impl_set_options", 00:22:48.531 "params": { 00:22:48.531 "impl_name": "ssl", 00:22:48.531 "recv_buf_size": 4096, 00:22:48.531 "send_buf_size": 4096, 00:22:48.531 "enable_recv_pipe": true, 00:22:48.531 "enable_quickack": false, 00:22:48.531 "enable_placement_id": 0, 00:22:48.531 "enable_zerocopy_send_server": true, 00:22:48.531 "enable_zerocopy_send_client": false, 00:22:48.531 "zerocopy_threshold": 0, 00:22:48.531 "tls_version": 0, 00:22:48.531 "enable_ktls": false 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "sock_impl_set_options", 00:22:48.531 "params": { 00:22:48.531 "impl_name": "posix", 00:22:48.531 "recv_buf_size": 2097152, 00:22:48.531 "send_buf_size": 2097152, 00:22:48.531 "enable_recv_pipe": true, 00:22:48.531 "enable_quickack": false, 00:22:48.531 "enable_placement_id": 0, 00:22:48.531 "enable_zerocopy_send_server": true, 00:22:48.531 "enable_zerocopy_send_client": false, 00:22:48.531 "zerocopy_threshold": 0, 00:22:48.531 "tls_version": 0, 00:22:48.531 "enable_ktls": false 00:22:48.531 } 00:22:48.531 } 00:22:48.531 ] 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "subsystem": "vmd", 00:22:48.531 "config": [] 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "subsystem": "accel", 00:22:48.531 "config": [ 00:22:48.531 { 00:22:48.531 "method": "accel_set_options", 00:22:48.531 "params": { 00:22:48.531 "small_cache_size": 128, 00:22:48.531 "large_cache_size": 16, 00:22:48.531 "task_count": 2048, 00:22:48.531 "sequence_count": 2048, 00:22:48.531 "buf_count": 2048 00:22:48.531 } 00:22:48.531 } 00:22:48.531 ] 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "subsystem": "bdev", 00:22:48.531 "config": [ 00:22:48.531 { 00:22:48.531 "method": "bdev_set_options", 00:22:48.531 "params": { 00:22:48.531 "bdev_io_pool_size": 65535, 00:22:48.531 "bdev_io_cache_size": 256, 00:22:48.531 "bdev_auto_examine": true, 00:22:48.531 "iobuf_small_cache_size": 128, 00:22:48.531 "iobuf_large_cache_size": 16 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "bdev_raid_set_options", 00:22:48.531 
"params": { 00:22:48.531 "process_window_size_kb": 1024, 00:22:48.531 "process_max_bandwidth_mb_sec": 0 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "bdev_iscsi_set_options", 00:22:48.531 "params": { 00:22:48.531 "timeout_sec": 30 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "bdev_nvme_set_options", 00:22:48.531 "params": { 00:22:48.531 "action_on_timeout": "none", 00:22:48.531 "timeout_us": 0, 00:22:48.531 "timeout_admin_us": 0, 00:22:48.531 "keep_alive_timeout_ms": 10000, 00:22:48.531 "arbitration_burst": 0, 00:22:48.531 "low_priority_weight": 0, 00:22:48.531 "medium_priority_weight": 0, 00:22:48.531 "high_priority_weight": 0, 00:22:48.531 "nvme_adminq_poll_period_us": 10000, 00:22:48.531 "nvme_ioq_poll_period_us": 0, 00:22:48.531 "io_queue_requests": 512, 00:22:48.531 "delay_cmd_submit": true, 00:22:48.531 "transport_retry_count": 4, 00:22:48.531 "bdev_retry_count": 3, 00:22:48.531 "transport_ack_timeout": 0, 00:22:48.531 "ctrlr_loss_timeout_sec": 0, 00:22:48.531 "reconnect_delay_sec": 0, 00:22:48.531 "fast_io_fail_timeout_sec": 0, 00:22:48.531 "disable_auto_failback": false, 00:22:48.531 "generate_uuids": false, 00:22:48.531 "transport_tos": 0, 00:22:48.531 "nvme_error_stat": false, 00:22:48.531 "rdma_srq_size": 0, 00:22:48.531 "io_path_stat": false, 00:22:48.531 "allow_accel_sequence": false, 00:22:48.531 "rdma_max_cq_size": 0, 00:22:48.531 "rdma_cm_event_timeout_ms": 0, 00:22:48.531 "dhchap_digests": [ 00:22:48.531 "sha256", 00:22:48.531 "sha384", 00:22:48.531 "sha512" 00:22:48.531 ], 00:22:48.531 "dhchap_dhgroups": [ 00:22:48.531 "null", 00:22:48.531 "ffdhe2048", 00:22:48.531 "ffdhe3072", 00:22:48.531 "ffdhe4096", 00:22:48.531 "ffdhe6144", 00:22:48.531 "ffdhe8192" 00:22:48.531 ] 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "bdev_nvme_attach_controller", 00:22:48.531 "params": { 00:22:48.531 "name": "nvme0", 00:22:48.531 "trtype": "TCP", 00:22:48.531 "adrfam": "IPv4", 00:22:48.531 "traddr": "10.0.0.2", 00:22:48.531 "trsvcid": "4420", 00:22:48.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.531 "prchk_reftag": false, 00:22:48.531 "prchk_guard": false, 00:22:48.531 "ctrlr_loss_timeout_sec": 0, 00:22:48.531 "reconnect_delay_sec": 0, 00:22:48.531 "fast_io_fail_timeout_sec": 0, 00:22:48.531 "psk": "key0", 00:22:48.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.531 "hdgst": false, 00:22:48.531 "ddgst": false 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "bdev_nvme_set_hotplug", 00:22:48.531 "params": { 00:22:48.531 "period_us": 100000, 00:22:48.531 "enable": false 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "bdev_enable_histogram", 00:22:48.531 "params": { 00:22:48.531 "name": "nvme0n1", 00:22:48.531 "enable": true 00:22:48.531 } 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "method": "bdev_wait_for_examine" 00:22:48.531 } 00:22:48.531 ] 00:22:48.531 }, 00:22:48.531 { 00:22:48.531 "subsystem": "nbd", 00:22:48.531 "config": [] 00:22:48.531 } 00:22:48.531 ] 00:22:48.531 }' 00:22:48.531 [2024-10-01 08:37:40.193589] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:22:48.531 [2024-10-01 08:37:40.193646] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784593 ] 00:22:48.531 [2024-10-01 08:37:40.267497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.531 [2024-10-01 08:37:40.320771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.793 [2024-10-01 08:37:40.455500] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.366 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.366 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:49.366 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:49.366 08:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:49.366 08:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.366 08:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.626 Running I/O for 1 seconds... 00:22:50.567 3876.00 IOPS, 15.14 MiB/s 00:22:50.567 Latency(us) 00:22:50.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.567 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:50.567 Verification LBA range: start 0x0 length 0x2000 00:22:50.567 nvme0n1 : 1.02 3944.29 15.41 0.00 0.00 32245.18 4696.75 72526.51 00:22:50.567 =================================================================================================================== 00:22:50.567 Total : 3944.29 15.41 0.00 0.00 32245.18 4696.75 72526.51 00:22:50.567 { 00:22:50.567 "results": [ 00:22:50.567 { 00:22:50.567 "job": "nvme0n1", 00:22:50.567 "core_mask": "0x2", 00:22:50.567 "workload": "verify", 00:22:50.567 "status": "finished", 00:22:50.567 "verify_range": { 00:22:50.567 "start": 0, 00:22:50.567 "length": 8192 00:22:50.567 }, 00:22:50.567 "queue_depth": 128, 00:22:50.567 "io_size": 4096, 00:22:50.567 "runtime": 1.015393, 00:22:50.567 "iops": 3944.2856115809345, 00:22:50.567 "mibps": 15.407365670238026, 00:22:50.567 "io_failed": 0, 00:22:50.567 "io_timeout": 0, 00:22:50.567 "avg_latency_us": 32245.177900957136, 00:22:50.567 "min_latency_us": 4696.746666666667, 00:22:50.567 "max_latency_us": 72526.50666666667 00:22:50.567 } 00:22:50.567 ], 00:22:50.567 "core_count": 1 00:22:50.567 } 00:22:50.567 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:50.567 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:50.567 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:50.567 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:50.567 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:50.567 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:50.567 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:50.567 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:50.568 nvmf_trace.0 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3784593 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3784593 ']' 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3784593 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:50.568 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3784593 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3784593' 00:22:50.829 killing process with pid 3784593 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3784593 00:22:50.829 Received shutdown signal, test time was about 1.000000 seconds 00:22:50.829 00:22:50.829 Latency(us) 00:22:50.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.829 =================================================================================================================== 00:22:50.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3784593 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.829 rmmod nvme_tcp 00:22:50.829 rmmod nvme_fabrics 00:22:50.829 rmmod nvme_keyring 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # 
return 0 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 3784442 ']' 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 3784442 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3784442 ']' 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3784442 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:50.829 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3784442 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3784442' 00:22:51.090 killing process with pid 3784442 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3784442 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3784442 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.090 08:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.637 08:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:53.637 08:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.PaaVwz6B3z /tmp/tmp.z12oH2jiJZ /tmp/tmp.ZvaF733Ncj 00:22:53.637 00:22:53.637 real 1m25.572s 00:22:53.637 user 2m13.616s 00:22:53.637 sys 0m26.958s 00:22:53.637 08:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:53.637 08:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.637 ************************************ 00:22:53.637 END TEST nvmf_tls 00:22:53.637 ************************************ 00:22:53.637 08:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:53.637 08:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:53.637 08:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:53.637 08:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:53.637 ************************************ 00:22:53.637 START TEST nvmf_fips 00:22:53.637 ************************************ 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:53.638 * Looking for test storage... 00:22:53.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:53.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.638 --rc genhtml_branch_coverage=1 00:22:53.638 --rc genhtml_function_coverage=1 00:22:53.638 --rc genhtml_legend=1 00:22:53.638 --rc geninfo_all_blocks=1 00:22:53.638 --rc geninfo_unexecuted_blocks=1 00:22:53.638 00:22:53.638 ' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:53.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.638 --rc genhtml_branch_coverage=1 00:22:53.638 --rc genhtml_function_coverage=1 00:22:53.638 --rc genhtml_legend=1 00:22:53.638 --rc geninfo_all_blocks=1 00:22:53.638 --rc geninfo_unexecuted_blocks=1 00:22:53.638 00:22:53.638 ' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:53.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.638 --rc genhtml_branch_coverage=1 00:22:53.638 --rc genhtml_function_coverage=1 00:22:53.638 --rc genhtml_legend=1 00:22:53.638 --rc geninfo_all_blocks=1 00:22:53.638 --rc geninfo_unexecuted_blocks=1 00:22:53.638 00:22:53.638 ' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:53.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.638 --rc genhtml_branch_coverage=1 00:22:53.638 --rc genhtml_function_coverage=1 00:22:53.638 --rc genhtml_legend=1 00:22:53.638 --rc geninfo_all_blocks=1 00:22:53.638 --rc geninfo_unexecuted_blocks=1 00:22:53.638 00:22:53.638 ' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.638 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:53.638 08:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:53.639 Error setting digest 00:22:53.639 40D26A09A07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:53.639 40D26A09A07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:53.639 
08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:53.639 08:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.785 08:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:01.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:01.785 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.785 08:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:01.785 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:01.785 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:01.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:23:01.785 00:23:01.785 --- 10.0.0.2 ping statistics --- 00:23:01.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.785 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:23:01.785 00:23:01.785 --- 10.0.0.1 ping statistics --- 00:23:01.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.785 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.785 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=3789299 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 3789299 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3789299 ']' 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.786 08:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:01.786 [2024-10-01 08:37:52.874701] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
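The topology those pings just verified was assembled a few entries up: the two ports of one physical E810 NIC become the target and initiator ends of a point-to-point link, with the target port isolated in its own network namespace. Collected here for reference (interface names, addresses, and the namespace name are exactly as the log printed them; this is a sketch of what nvmf/common.sh's nvmf_tcp_init does, not a replacement for it):

    ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port; the comment lets cleanup strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns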
00:23:01.786 [2024-10-01 08:37:52.874774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.786 [2024-10-01 08:37:52.964632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.786 [2024-10-01 08:37:53.056962] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.786 [2024-10-01 08:37:53.057031] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.786 [2024-10-01 08:37:53.057040] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.786 [2024-10-01 08:37:53.057047] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.786 [2024-10-01 08:37:53.057053] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.786 [2024-10-01 08:37:53.057844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.G3E 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.G3E 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.G3E 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.G3E 00:23:02.048 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:02.309 [2024-10-01 08:37:53.889043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.309 [2024-10-01 08:37:53.905030] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.309 [2024-10-01 08:37:53.905339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.309 malloc0 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:02.309 08:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3789649 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3789649 /var/tmp/bdevperf.sock 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3789649 ']' 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.309 08:37:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:02.309 [2024-10-01 08:37:54.045786] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:23:02.310 [2024-10-01 08:37:54.045863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789649 ] 00:23:02.310 [2024-10-01 08:37:54.103981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.570 [2024-10-01 08:37:54.167069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.141 08:37:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.141 08:37:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:03.141 08:37:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.G3E 00:23:03.402 08:37:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:03.402 [2024-10-01 08:37:55.184510] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.662 TLSTESTn1 00:23:03.662 08:37:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.662 Running I/O for 10 seconds... 
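The ten-second run reported below drives I/O over a TLS connection that was configured entirely through RPC sockets in the commands just above. Condensed for reference (the key value, socket path, address, and NQNs are exactly as printed in the trace; a sketch of the fips.sh flow with the long Jenkins workspace paths shortened):

    key_path=$(mktemp -t spdk-psk.XXX)                # /tmp/spdk-psk.G3E in this run
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"                            # the harness locks the key file to 0600
    # register the PSK with bdevperf's keyring, then attach the controller over TLS
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # start the timed workload through the same RPC socket
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests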
00:23:13.870 5538.00 IOPS, 21.63 MiB/s 5510.50 IOPS, 21.53 MiB/s 5506.00 IOPS, 21.51 MiB/s 5486.25 IOPS, 21.43 MiB/s 5491.80 IOPS, 21.45 MiB/s 5538.67 IOPS, 21.64 MiB/s 5468.43 IOPS, 21.36 MiB/s 5397.75 IOPS, 21.08 MiB/s 5368.00 IOPS, 20.97 MiB/s 5320.30 IOPS, 20.78 MiB/s 00:23:13.870 Latency(us) 00:23:13.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.870 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:13.870 Verification LBA range: start 0x0 length 0x2000 00:23:13.870 TLSTESTn1 : 10.03 5319.19 20.78 0.00 0.00 24019.89 4915.20 37792.43 00:23:13.870 =================================================================================================================== 00:23:13.870 Total : 5319.19 20.78 0.00 0.00 24019.89 4915.20 37792.43 00:23:13.870 { 00:23:13.870 "results": [ 00:23:13.870 { 00:23:13.870 "job": "TLSTESTn1", 00:23:13.870 "core_mask": "0x4", 00:23:13.870 "workload": "verify", 00:23:13.870 "status": "finished", 00:23:13.870 "verify_range": { 00:23:13.870 "start": 0, 00:23:13.870 "length": 8192 00:23:13.870 }, 00:23:13.870 "queue_depth": 128, 00:23:13.870 "io_size": 4096, 00:23:13.870 "runtime": 10.02596, 00:23:13.870 "iops": 5319.19137917965, 00:23:13.870 "mibps": 20.778091324920506, 00:23:13.870 "io_failed": 0, 00:23:13.870 "io_timeout": 0, 00:23:13.870 "avg_latency_us": 24019.88588286768, 00:23:13.870 "min_latency_us": 4915.2, 00:23:13.870 "max_latency_us": 37792.426666666666 00:23:13.870 } 00:23:13.870 ], 00:23:13.870 "core_count": 1 00:23:13.870 } 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:13.870 nvmf_trace.0 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3789649 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3789649 ']' 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3789649 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3789649 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3789649' 00:23:13.870 killing process with pid 3789649 00:23:13.870 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3789649 00:23:13.870 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.870 00:23:13.870 Latency(us) 00:23:13.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.870 =================================================================================================================== 00:23:13.871 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.871 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3789649 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.132 rmmod nvme_tcp 00:23:14.132 rmmod nvme_fabrics 00:23:14.132 rmmod nvme_keyring 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 3789299 ']' 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 3789299 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3789299 ']' 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3789299 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3789299 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3789299' 00:23:14.132 killing process with pid 3789299 00:23:14.132 08:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3789299 00:23:14.132 08:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3789299 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.392 08:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.306 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:16.306 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.G3E 00:23:16.306 00:23:16.306 real 0m23.078s 00:23:16.306 user 0m24.232s 00:23:16.306 sys 0m10.140s 00:23:16.306 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.306 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:16.306 ************************************ 00:23:16.306 END TEST nvmf_fips 00:23:16.306 ************************************ 00:23:16.306 08:38:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:16.306 08:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:16.306 08:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.306 08:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:16.567 ************************************ 00:23:16.567 START TEST nvmf_control_msg_list 00:23:16.567 ************************************ 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:16.567 * Looking for test storage... 
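Before the next suite's output begins, it is worth collecting the OpenSSL gate that nvmf_fips applied near its start. Stripped of the harness plumbing, the checks amount to the following (a sketch; spdk_fips.conf stands for the configuration fips.sh generates on the fly to force the FIPS provider on):

    openssl version | awk '{print $2}'               # 3.1.1 here; must be >= 3.0.0
    test -f "$(openssl info -modulesdir)/fips.so"    # FIPS provider module present
    export OPENSSL_CONF=spdk_fips.conf               # generated config enabling fips=yes
    openssl list -providers | grep name              # expect a base and a fips provider
    ! echo hello | openssl md5                       # MD5 must fail, as it did above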
00:23:16.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.567 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:16.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.567 --rc genhtml_branch_coverage=1 00:23:16.567 --rc genhtml_function_coverage=1 00:23:16.567 --rc genhtml_legend=1 00:23:16.567 --rc geninfo_all_blocks=1 00:23:16.567 --rc geninfo_unexecuted_blocks=1 00:23:16.567 00:23:16.567 ' 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:16.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.568 --rc genhtml_branch_coverage=1 00:23:16.568 --rc genhtml_function_coverage=1 00:23:16.568 --rc genhtml_legend=1 00:23:16.568 --rc geninfo_all_blocks=1 00:23:16.568 --rc geninfo_unexecuted_blocks=1 00:23:16.568 00:23:16.568 ' 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:16.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.568 --rc genhtml_branch_coverage=1 00:23:16.568 --rc genhtml_function_coverage=1 00:23:16.568 --rc genhtml_legend=1 00:23:16.568 --rc geninfo_all_blocks=1 00:23:16.568 --rc geninfo_unexecuted_blocks=1 00:23:16.568 00:23:16.568 ' 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:16.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.568 --rc genhtml_branch_coverage=1 00:23:16.568 --rc genhtml_function_coverage=1 00:23:16.568 --rc genhtml_legend=1 00:23:16.568 --rc geninfo_all_blocks=1 00:23:16.568 --rc geninfo_unexecuted_blocks=1 00:23:16.568 00:23:16.568 ' 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.568 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:16.829 08:38:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:24.977 08:38:15 
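The '[: : integer expression expected' complaint above is a real, if harmless, defect at common.sh line 33: an unset configuration knob expands to the empty string, and test's -eq needs integers on both sides. The condition simply evaluates false (exit status 2) and the script proceeds to the @37 branch. A minimal reproduction plus the usual guard, as a sketch (the variable name is hypothetical; the log does not show which knob was empty):

    flag=''                              # hypothetical unset/empty knob
    [ "$flag" -eq 1 ] && echo on         # [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on    # guarded: empty defaults to 0, test stays quiet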
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:24.977 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:24.977 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:24.978 08:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:24.978 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:24.978 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:24.978 Found net devices under 
0000:4b:00.1: cvl_0_1 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.978 08:38:15 
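For phy-mode TCP the nvmf_tcp_init sequence traced above splits the e810 pair across namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and becomes the 10.0.0.2 target side, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side, so traffic crosses real NICs. The same topology distilled to standalone commands (the SPDK_NVMF comment tag on the iptables rule is elided here):

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # cross-namespace sanity check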
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:23:24.978 00:23:24.978 --- 10.0.0.2 ping statistics --- 00:23:24.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.978 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:23:24.978 00:23:24.978 --- 10.0.0.1 ping statistics --- 00:23:24.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.978 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=3796006 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 3796006 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3796006 ']' 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.978 08:38:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.978 [2024-10-01 08:38:15.818180] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:23:24.978 [2024-10-01 08:38:15.818254] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.978 [2024-10-01 08:38:15.893257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.978 [2024-10-01 08:38:15.967632] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.978 [2024-10-01 08:38:15.967675] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.978 [2024-10-01 08:38:15.967683] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.978 [2024-10-01 08:38:15.967690] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.978 [2024-10-01 08:38:15.967696] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.978 [2024-10-01 08:38:15.968296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.978 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.979 [2024-10-01 08:38:16.652832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.979 08:38:16 
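The rpc_cmd calls here and on the lines that follow configure the whole target over the app's /var/tmp/spdk.sock: a TCP transport capped at one control message and 768 bytes of in-capsule data, an allow-any-host subsystem, a small malloc bdev as its namespace, and a listener on 10.0.0.2:4420. The same sequence issued directly through SPDK's rpc.py, sketched under the assumption that rpc_cmd is a thin wrapper around it with the socket path left at the default:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a       # -a: allow any host
    $RPC bdev_malloc_create -b Malloc0 32 512                      # 32 MB, 512 B blocks
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420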
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.979 Malloc0 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:24.979 [2024-10-01 08:38:16.719647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3796351 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3796352 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3796353 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3796351 00:23:24.979 08:38:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:24.979 [2024-10-01 08:38:16.790105] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:25.239 [2024-10-01 08:38:16.810107] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:25.239 [2024-10-01 08:38:16.810332] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:26.182 Initializing NVMe Controllers 00:23:26.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:26.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:26.182 Initialization complete. Launching workers. 00:23:26.182 ======================================================== 00:23:26.182 Latency(us) 00:23:26.182 Device Information : IOPS MiB/s Average min max 00:23:26.182 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1627.00 6.36 614.71 238.93 869.75 00:23:26.182 ======================================================== 00:23:26.182 Total : 1627.00 6.36 614.71 238.93 869.75 00:23:26.182 00:23:26.182 [2024-10-01 08:38:17.854020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88a50 is same with the state(6) to be set 00:23:26.182 [2024-10-01 08:38:17.854064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88a50 is same with the state(6) to be set 00:23:26.182 Initializing NVMe Controllers 00:23:26.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:26.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:26.182 Initialization complete. Launching workers. 
00:23:26.182 ======================================================== 00:23:26.182 Latency(us) 00:23:26.182 Device Information : IOPS MiB/s Average min max 00:23:26.182 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40903.21 40794.75 40970.65 00:23:26.182 ======================================================== 00:23:26.183 Total : 25.00 0.10 40903.21 40794.75 40970.65 00:23:26.183 00:23:26.183 [2024-10-01 08:38:17.895962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8e520 is same with the state(6) to be set 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3796352 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3796353 00:23:26.183 Initializing NVMe Controllers 00:23:26.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:26.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:26.183 Initialization complete. Launching workers. 00:23:26.183 ======================================================== 00:23:26.183 Latency(us) 00:23:26.183 Device Information : IOPS MiB/s Average min max 00:23:26.183 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2050.00 8.01 487.58 144.77 741.27 00:23:26.183 ======================================================== 00:23:26.183 Total : 2050.00 8.01 487.58 144.77 741.27 00:23:26.183 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.183 08:38:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.183 rmmod nvme_tcp 00:23:26.183 rmmod nvme_fabrics 00:23:26.443 rmmod nvme_keyring 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 3796006 ']' 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 3796006 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3796006 ']' 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3796006 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:23:26.443 08:38:18 
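The three result tables above are internally consistent with the -q 1 -o 4096 -w randread -t 1 invocations: at queue depth 1, IOPS should be roughly 1 / average latency, and indeed 1/614.71 us ≈ 1627 (lcore 2), 1/487.58 us ≈ 2051 vs the reported 2050 (lcore 3), and 1/40903.21 us ≈ 24.4 vs the reported 25 (lcore 1). The ~40.9 ms average on lcore 1 is the outlier the test exists to produce: with --control-msg-num 1 the transport has a single control-message buffer, and the latencies are consistent with one of the three concurrent initiators spending most of the run queued behind it.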
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3796006 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3796006' 00:23:26.443 killing process with pid 3796006 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3796006 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3796006 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:26.443 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:23:26.704 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:26.704 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:26.704 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.704 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.704 08:38:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.620 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:28.620 00:23:28.620 real 0m12.173s 00:23:28.620 user 0m7.721s 00:23:28.620 sys 0m6.410s 00:23:28.620 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.620 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:28.620 ************************************ 00:23:28.620 END TEST nvmf_control_msg_list 00:23:28.620 ************************************ 00:23:28.620 08:38:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:28.620 08:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:28.620 08:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.620 08:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:28.620 
************************************ 00:23:28.620 START TEST nvmf_wait_for_buf 00:23:28.620 ************************************ 00:23:28.620 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:28.882 * Looking for test storage... 00:23:28.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:28.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.882 --rc genhtml_branch_coverage=1 00:23:28.882 --rc genhtml_function_coverage=1 00:23:28.882 --rc genhtml_legend=1 00:23:28.882 --rc geninfo_all_blocks=1 00:23:28.882 --rc geninfo_unexecuted_blocks=1 00:23:28.882 00:23:28.882 ' 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:28.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.882 --rc genhtml_branch_coverage=1 00:23:28.882 --rc genhtml_function_coverage=1 00:23:28.882 --rc genhtml_legend=1 00:23:28.882 --rc geninfo_all_blocks=1 00:23:28.882 --rc geninfo_unexecuted_blocks=1 00:23:28.882 00:23:28.882 ' 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:28.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.882 --rc genhtml_branch_coverage=1 00:23:28.882 --rc genhtml_function_coverage=1 00:23:28.882 --rc genhtml_legend=1 00:23:28.882 --rc geninfo_all_blocks=1 00:23:28.882 --rc geninfo_unexecuted_blocks=1 00:23:28.882 00:23:28.882 ' 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:28.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.882 --rc genhtml_branch_coverage=1 00:23:28.882 --rc genhtml_function_coverage=1 00:23:28.882 --rc genhtml_legend=1 00:23:28.882 --rc geninfo_all_blocks=1 00:23:28.882 --rc geninfo_unexecuted_blocks=1 00:23:28.882 00:23:28.882 ' 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.882 08:38:20 
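The cmp_versions trace just replayed (and earlier, before the control_msg_list storage scan) is how the harness decides whether the installed lcov predates 2.x, which in turn selects the --rc lcov_* option spelling exported above. A field-by-field sketch of the same comparison, assuming purely numeric dot/dash-separated components as in the traced lt 1.15 2 call:

    version_lt() {                 # true if $1 < $2, fields split on . and -
        local IFS=.- v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields count as 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                   # equal is not less-than
    }
    version_lt 1.15 2 && echo older   # matches the traced outcome for lcov 1.15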
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.882 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:28.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # 
'[' -z tcp ']' 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:28.883 08:38:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.026 
08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:37.026 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:37.026 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:37.026 
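The trace above shows gather_supported_nvmf_pci_devs sorting the host's NICs into e810/x722/mlx buckets by PCI vendor:device ID before choosing the test interfaces. A minimal standalone sketch of that classification, assuming only the ID table visible in the trace (classify_nic is an illustrative name, not a helper in nvmf/common.sh):

    # Sketch of the vendor:device bucketing traced above.
    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 mlx

    classify_nic() {
        local pci=$1 vendor=$2 device=$3
        case "$vendor:$device" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("$pci") ;;  # Intel E810 family
            "$intel:0x37d2")                   x722+=("$pci") ;;  # Intel X722
            "$mellanox:"*)                     mlx+=("$pci")  ;;  # Mellanox ConnectX family
        esac
    }

    # The two ports found in this run both report 0x8086:0x159b, so they land in e810:
    classify_nic 0000:4b:00.0 0x8086 0x159b
    classify_nic 0000:4b:00.1 0x8086 0x159b
    echo "E810 ports: ${e810[*]}"

Because e810 comes out non-empty and the transport is tcp, pci_devs is narrowed to the two E810 ports, whose bound net devices (cvl_0_0, cvl_0_1) are enumerated in the steps that follow.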
08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:37.026 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:37.026 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.026 08:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.026 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:23:37.027 00:23:37.027 --- 10.0.0.2 ping statistics --- 00:23:37.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.027 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:23:37.027 00:23:37.027 --- 10.0.0.1 ping statistics --- 00:23:37.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.027 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=3800685 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 3800685 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3800685 ']' 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.027 08:38:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 [2024-10-01 08:38:27.894135] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
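The nvmf_tcp_init sequence traced above turns the two E810 ports into a two-endpoint TCP topology: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings confirm reachability in both directions. A condensed sketch of the same steps, using the interface names and addresses from this run (run as root):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic; the comment tag lets teardown strip the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator

With the plumbing verified, nvmf_tgt is started inside the namespace with --wait-for-rpc so that iobuf_set_options can shrink the small buffer pool to 154 entries before framework_start_init runs. That undersized pool is the point of the wait_for_buf test: the subsequent spdk_nvme_perf load must exhaust it, so the small_pool.retry counter read back at the end (2038 in this run) is asserted to be non-zero.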
00:23:37.027 [2024-10-01 08:38:27.894204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.027 [2024-10-01 08:38:27.965255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.027 [2024-10-01 08:38:28.038612] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.027 [2024-10-01 08:38:28.038649] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.027 [2024-10-01 08:38:28.038657] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.027 [2024-10-01 08:38:28.038664] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.027 [2024-10-01 08:38:28.038670] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.027 [2024-10-01 08:38:28.039278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.027 08:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 Malloc0 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 [2024-10-01 08:38:28.821922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.027 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.288 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.289 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:37.289 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.289 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:37.289 [2024-10-01 08:38:28.858162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.289 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.289 08:38:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:37.289 [2024-10-01 08:38:28.938874] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:38.672 Initializing NVMe Controllers 00:23:38.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:38.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:38.672 Initialization complete. Launching workers. 00:23:38.672 ======================================================== 00:23:38.672 Latency(us) 00:23:38.672 Device Information : IOPS MiB/s Average min max 00:23:38.672 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.16 7987.27 63852.49 00:23:38.672 ======================================================== 00:23:38.672 Total : 129.00 16.12 32295.16 7987.27 63852.49 00:23:38.672 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.672 rmmod nvme_tcp 00:23:38.672 rmmod nvme_fabrics 00:23:38.672 rmmod nvme_keyring 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 3800685 ']' 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 3800685 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3800685 ']' 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3800685 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.672 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3800685 00:23:38.933 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:38.933 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3800685' 00:23:38.934 killing process with pid 3800685 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3800685 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3800685 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.934 08:38:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.475 08:38:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.475 00:23:41.475 real 0m12.346s 00:23:41.475 user 0m5.074s 00:23:41.475 sys 0m5.836s 00:23:41.475 08:38:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:41.475 08:38:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:41.475 ************************************ 00:23:41.475 END TEST nvmf_wait_for_buf 00:23:41.475 ************************************ 00:23:41.475 08:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:41.475 08:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:41.475 08:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:41.475 08:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:41.475 08:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:41.475 08:38:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.191 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:23:48.192 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:48.192 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:48.192 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:48.192 Found net devices under 0000:4b:00.1: 
cvl_0_1 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:48.192 ************************************ 00:23:48.192 START TEST nvmf_perf_adq 00:23:48.192 ************************************ 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:48.192 * Looking for test storage... 00:23:48.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:48.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.192 --rc genhtml_branch_coverage=1 00:23:48.192 --rc genhtml_function_coverage=1 00:23:48.192 --rc genhtml_legend=1 00:23:48.192 --rc geninfo_all_blocks=1 00:23:48.192 --rc geninfo_unexecuted_blocks=1 00:23:48.192 00:23:48.192 ' 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:48.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.192 --rc genhtml_branch_coverage=1 00:23:48.192 --rc genhtml_function_coverage=1 00:23:48.192 --rc genhtml_legend=1 00:23:48.192 --rc geninfo_all_blocks=1 00:23:48.192 --rc geninfo_unexecuted_blocks=1 00:23:48.192 00:23:48.192 ' 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:48.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.192 --rc genhtml_branch_coverage=1 00:23:48.192 --rc genhtml_function_coverage=1 00:23:48.192 --rc genhtml_legend=1 00:23:48.192 --rc geninfo_all_blocks=1 00:23:48.192 --rc geninfo_unexecuted_blocks=1 00:23:48.192 00:23:48.192 ' 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:48.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.192 --rc genhtml_branch_coverage=1 00:23:48.192 --rc genhtml_function_coverage=1 00:23:48.192 --rc genhtml_legend=1 00:23:48.192 --rc geninfo_all_blocks=1 00:23:48.192 --rc geninfo_unexecuted_blocks=1 00:23:48.192 00:23:48.192 ' 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
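The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates version 2 (lt 1.15 2), which selects the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings exported just after. A simplified stand-in for the same field-by-field comparison, assuming splitting on '.', '-' and ':' suffices for the versions involved (this lt() is a sketch, not the exact helper):

    lt() {
        local -a v1 v2
        local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller field: <
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger field: not <
        done
        return 1                                           # all fields equal: not <
    }

    lt 1.15 2 && echo "lcov < 2: keep the legacy lcov_* option names"

Here 1.15 splits into (1 15) and 2 into (2); the first fields already differ (1 < 2), so the comparison succeeds without looking at the remaining fields, matching the return 0 seen in the trace.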
00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.192 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:48.193 08:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.193 08:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:56.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:56.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:56.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:56.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:56.334 08:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:56.595 08:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:58.507 08:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:03.798 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.799 08:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.799 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.799 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.799 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.799 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.799 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:24:03.800 00:24:03.800 --- 10.0.0.2 ping statistics --- 00:24:03.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.800 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:24:03.800 00:24:03.800 --- 10.0.0.1 ping statistics --- 00:24:03.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.800 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3810946 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 
3810946 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3810946 ']' 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.800 08:38:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.800 [2024-10-01 08:38:55.580049] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:24:03.800 [2024-10-01 08:38:55.580100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.061 [2024-10-01 08:38:55.650030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.061 [2024-10-01 08:38:55.714649] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.061 [2024-10-01 08:38:55.714689] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.061 [2024-10-01 08:38:55.714697] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.061 [2024-10-01 08:38:55.714704] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.061 [2024-10-01 08:38:55.714710] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
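The run above is nvmftestinit wiring the two back-to-back E810 ports into a self-contained NVMe/TCP topology: one port moves into a fresh network namespace to act as the target, the other stays in the default namespace as the initiator, and nvmf_tgt is then launched inside that namespace. A minimal sketch of the same sequence, assuming the cvl_0_0/cvl_0_1 names and 10.0.0.x addressing seen in this trace (the nvmf_tgt path is shortened here for readability):

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # open the NVMe/TCP port; the comment lets nvmftestfini find and drop the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                            # confirm the target side answers
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc

Every command mirrors an entry in the trace above; only the relative nvmf_tgt path differs.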
00:24:04.061 [2024-10-01 08:38:55.716441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.061 [2024-10-01 08:38:55.716457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.061 [2024-10-01 08:38:55.716590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.061 [2024-10-01 08:38:55.716591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.632 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.892 [2024-10-01 08:38:56.555567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.892 Malloc1 00:24:04.892 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:04.893 [2024-10-01 08:38:56.614931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3811241 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:04.893 08:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:07.445 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:07.445 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.445 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:07.445 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.445 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:07.445 "tick_rate": 2400000000, 00:24:07.445 "poll_groups": [ 00:24:07.445 { 00:24:07.445 "name": "nvmf_tgt_poll_group_000", 00:24:07.445 "admin_qpairs": 1, 00:24:07.445 "io_qpairs": 1, 00:24:07.445 "current_admin_qpairs": 1, 00:24:07.445 "current_io_qpairs": 1, 00:24:07.445 "pending_bdev_io": 0, 00:24:07.445 
"completed_nvme_io": 19498, 00:24:07.445 "transports": [ 00:24:07.445 { 00:24:07.445 "trtype": "TCP" 00:24:07.445 } 00:24:07.445 ] 00:24:07.445 }, 00:24:07.445 { 00:24:07.445 "name": "nvmf_tgt_poll_group_001", 00:24:07.445 "admin_qpairs": 0, 00:24:07.445 "io_qpairs": 1, 00:24:07.445 "current_admin_qpairs": 0, 00:24:07.445 "current_io_qpairs": 1, 00:24:07.445 "pending_bdev_io": 0, 00:24:07.445 "completed_nvme_io": 27629, 00:24:07.445 "transports": [ 00:24:07.445 { 00:24:07.445 "trtype": "TCP" 00:24:07.445 } 00:24:07.445 ] 00:24:07.445 }, 00:24:07.445 { 00:24:07.445 "name": "nvmf_tgt_poll_group_002", 00:24:07.445 "admin_qpairs": 0, 00:24:07.445 "io_qpairs": 1, 00:24:07.445 "current_admin_qpairs": 0, 00:24:07.445 "current_io_qpairs": 1, 00:24:07.445 "pending_bdev_io": 0, 00:24:07.445 "completed_nvme_io": 22423, 00:24:07.445 "transports": [ 00:24:07.445 { 00:24:07.445 "trtype": "TCP" 00:24:07.445 } 00:24:07.445 ] 00:24:07.445 }, 00:24:07.445 { 00:24:07.445 "name": "nvmf_tgt_poll_group_003", 00:24:07.445 "admin_qpairs": 0, 00:24:07.446 "io_qpairs": 1, 00:24:07.446 "current_admin_qpairs": 0, 00:24:07.446 "current_io_qpairs": 1, 00:24:07.446 "pending_bdev_io": 0, 00:24:07.446 "completed_nvme_io": 19858, 00:24:07.446 "transports": [ 00:24:07.446 { 00:24:07.446 "trtype": "TCP" 00:24:07.446 } 00:24:07.446 ] 00:24:07.446 } 00:24:07.446 ] 00:24:07.446 }' 00:24:07.446 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:07.446 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:07.446 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:07.446 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:07.446 08:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3811241 00:24:15.588 Initializing NVMe Controllers 00:24:15.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:15.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:15.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:15.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:15.588 Initialization complete. Launching workers. 
00:24:15.588 ======================================================== 00:24:15.588 Latency(us) 00:24:15.588 Device Information : IOPS MiB/s Average min max 00:24:15.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11120.98 43.44 5766.98 1753.21 43806.77 00:24:15.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14745.38 57.60 4339.92 1424.83 8766.17 00:24:15.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14368.29 56.13 4454.15 1182.59 11252.40 00:24:15.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13249.72 51.76 4845.23 1218.44 44843.42 00:24:15.588 ======================================================== 00:24:15.588 Total : 53484.36 208.92 4792.52 1182.59 44843.42 00:24:15.588 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.588 rmmod nvme_tcp 00:24:15.588 rmmod nvme_fabrics 00:24:15.588 rmmod nvme_keyring 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3810946 ']' 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3810946 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3810946 ']' 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3810946 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3810946 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3810946' 00:24:15.588 killing process with pid 3810946 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3810946 00:24:15.588 08:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3810946 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:15.588 
08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.588 08:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.503 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:17.503 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:17.503 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:17.503 08:39:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:19.417 08:39:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:21.328 08:39:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.620 08:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.620 08:39:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # 
(( 2 == 0 )) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:26.620 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:26.620 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:26.620 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:26.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.620 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:24:26.621 00:24:26.621 --- 10.0.0.2 ping statistics --- 00:24:26.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.621 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:26.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:24:26.621 00:24:26.621 --- 10.0.0.1 ping statistics --- 00:24:26.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.621 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:26.621 net.core.busy_poll = 1 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:26.621 net.core.busy_read = 1 00:24:26.621 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:26.621 08:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3816350 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 3816350 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3816350 ']' 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.883 08:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:26.884 [2024-10-01 08:39:18.680870] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:24:26.884 [2024-10-01 08:39:18.680940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.145 [2024-10-01 08:39:18.752988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.145 [2024-10-01 08:39:18.826460] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.145 [2024-10-01 08:39:18.826498] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
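The second half of the test turns ADQ on for real. adq_configure_driver, traced just above, enables hardware tc offload on the target port, turns off the driver's packet-inspect optimization, enables kernel busy polling, carves the port into two traffic classes with mqprio, and installs a hardware-only flower filter steering NVMe/TCP (port 4420) traffic into the second class; set_xps_rxqs then aligns XPS so transmit queue selection follows the same mapping. A condensed sketch, with ns() as a hypothetical helper (not part of the original script) for running inside the target namespace:

    ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # hypothetical helper
    ns ethtool --offload cvl_0_0 hw-tc-offload on
    ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (the ADQ set)
    ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ns tc qdisc add dev cvl_0_0 ingress
    # skip_sw keeps classification in the NIC; hw_tc 1 lands port-4420 flows on TC1's queues
    ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1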
00:24:27.145 [2024-10-01 08:39:18.826506] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.145 [2024-10-01 08:39:18.826513] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.145 [2024-10-01 08:39:18.826519] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.145 [2024-10-01 08:39:18.828035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.145 [2024-10-01 08:39:18.828247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.145 [2024-10-01 08:39:18.828248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.145 [2024-10-01 08:39:18.828103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.717 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:27.978 08:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.978 [2024-10-01 08:39:19.669279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.978 Malloc1 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.978 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:27.979 [2024-10-01 08:39:19.728757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3816685 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:27.979 08:39:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:30.523 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:30.523 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.523 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:30.523 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
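For readers reconstructing the test from this trace: the ADQ plumbing that perf_adq.sh applied just before starting the target reduces to three tc commands plus matching socket options on the SPDK side. A minimal sketch, reusing the interface, namespace, and listener address seen in the log; the rpc.py calls are the repo-relative CLI form of the rpc_cmd wrappers traced above:

# Split the NIC queues into two hardware traffic classes:
# TC0 = queues 0-1, TC1 = queues 2-3, offloaded in channel mode.
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# Ingress qdisc so flower filters can classify inbound packets.
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress

# Steer traffic for the listener (10.0.0.2:4420) into TC1, hardware-only (skip_sw).
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
    prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Target side: placement-id grouping plus a socket priority that the mqprio
# map (0 1) sends to TC1.
scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1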
00:24:30.523 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:30.523 "tick_rate": 2400000000, 00:24:30.523 "poll_groups": [ 00:24:30.523 { 00:24:30.523 "name": "nvmf_tgt_poll_group_000", 00:24:30.523 "admin_qpairs": 1, 00:24:30.523 "io_qpairs": 4, 00:24:30.523 "current_admin_qpairs": 1, 00:24:30.523 "current_io_qpairs": 4, 00:24:30.523 "pending_bdev_io": 0, 00:24:30.523 "completed_nvme_io": 36077, 00:24:30.523 "transports": [ 00:24:30.523 { 00:24:30.523 "trtype": "TCP" 00:24:30.523 } 00:24:30.523 ] 00:24:30.523 }, 00:24:30.523 { 00:24:30.523 "name": "nvmf_tgt_poll_group_001", 00:24:30.523 "admin_qpairs": 0, 00:24:30.523 "io_qpairs": 0, 00:24:30.523 "current_admin_qpairs": 0, 00:24:30.523 "current_io_qpairs": 0, 00:24:30.523 "pending_bdev_io": 0, 00:24:30.523 "completed_nvme_io": 0, 00:24:30.523 "transports": [ 00:24:30.523 { 00:24:30.523 "trtype": "TCP" 00:24:30.523 } 00:24:30.523 ] 00:24:30.523 }, 00:24:30.523 { 00:24:30.523 "name": "nvmf_tgt_poll_group_002", 00:24:30.523 "admin_qpairs": 0, 00:24:30.523 "io_qpairs": 0, 00:24:30.523 "current_admin_qpairs": 0, 00:24:30.523 "current_io_qpairs": 0, 00:24:30.523 "pending_bdev_io": 0, 00:24:30.523 "completed_nvme_io": 0, 00:24:30.523 "transports": [ 00:24:30.523 { 00:24:30.523 "trtype": "TCP" 00:24:30.523 } 00:24:30.523 ] 00:24:30.523 }, 00:24:30.523 { 00:24:30.523 "name": "nvmf_tgt_poll_group_003", 00:24:30.523 "admin_qpairs": 0, 00:24:30.523 "io_qpairs": 0, 00:24:30.523 "current_admin_qpairs": 0, 00:24:30.523 "current_io_qpairs": 0, 00:24:30.523 "pending_bdev_io": 0, 00:24:30.523 "completed_nvme_io": 0, 00:24:30.523 "transports": [ 00:24:30.523 { 00:24:30.523 "trtype": "TCP" 00:24:30.523 } 00:24:30.523 ] 00:24:30.523 } 00:24:30.523 ] 00:24:30.523 }' 00:24:30.523 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:30.523 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:30.523 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:24:30.524 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:24:30.524 08:39:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3816685 00:24:38.660 Initializing NVMe Controllers 00:24:38.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:38.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:38.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:38.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:38.660 Initialization complete. Launching workers. 
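The nvmf_get_stats query just traced is the pass/fail gate for ADQ: with placement-id enabled, the four I/O qpairs from the perf initiator should pack onto as few poll groups as possible, and the stats above show exactly that (36077 completions on nvmf_tgt_poll_group_000, zero elsewhere). A condensed sketch of the same check, assuming rpc.py and jq are on PATH:

# One output line per poll group with no active I/O qpairs; steering is
# treated as broken if fewer than 2 of the 4 groups are idle.
count=$(scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering failed: I/O qpairs spread across poll groups" >&2
    exit 1
fi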
00:24:38.660 ======================================================== 00:24:38.660 Latency(us) 00:24:38.660 Device Information : IOPS MiB/s Average min max 00:24:38.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6555.50 25.61 9766.03 1254.12 61275.70 00:24:38.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6109.70 23.87 10508.86 1140.61 56722.92 00:24:38.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6284.40 24.55 10185.63 1397.28 57597.40 00:24:38.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6347.30 24.79 10102.43 1204.82 56868.04 00:24:38.660 ======================================================== 00:24:38.660 Total : 25296.89 98.82 10134.08 1140.61 61275.70 00:24:38.660 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.660 rmmod nvme_tcp 00:24:38.660 rmmod nvme_fabrics 00:24:38.660 rmmod nvme_keyring 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3816350 ']' 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3816350 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3816350 ']' 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3816350 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.660 08:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3816350 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3816350' 00:24:38.660 killing process with pid 3816350 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3816350 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3816350 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:38.660 
08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.660 08:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:40.574 00:24:40.574 real 0m52.642s 00:24:40.574 user 2m50.329s 00:24:40.574 sys 0m10.692s 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:40.574 ************************************ 00:24:40.574 END TEST nvmf_perf_adq 00:24:40.574 ************************************ 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:40.574 ************************************ 00:24:40.574 START TEST nvmf_shutdown 00:24:40.574 ************************************ 00:24:40.574 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:40.837 * Looking for test storage... 
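The nvmftestfini teardown traced above has to undo the test's network setup without disturbing unrelated host state. A condensed sketch of the steps just logged; the SPDK_NVMF comment string is the tag the setup phase attached to its firewall rule precisely so this grep can find it later:

# Unload host-side NVMe-oF modules (the rmmod lines in the log).
modprobe -v -r nvme-tcp || true
modprobe -v -r nvme-fabrics || true

# Drop only the iptables rules tagged by this test run, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the target namespace and flush the initiator-side address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1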
00:24:40.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:40.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.837 --rc genhtml_branch_coverage=1 00:24:40.837 --rc genhtml_function_coverage=1 00:24:40.837 --rc genhtml_legend=1 00:24:40.837 --rc geninfo_all_blocks=1 00:24:40.837 --rc geninfo_unexecuted_blocks=1 00:24:40.837 00:24:40.837 ' 00:24:40.837 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:40.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.837 --rc genhtml_branch_coverage=1 00:24:40.837 --rc genhtml_function_coverage=1 00:24:40.837 --rc genhtml_legend=1 00:24:40.837 --rc geninfo_all_blocks=1 00:24:40.837 --rc geninfo_unexecuted_blocks=1 00:24:40.837 00:24:40.838 ' 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:40.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.838 --rc genhtml_branch_coverage=1 00:24:40.838 --rc genhtml_function_coverage=1 00:24:40.838 --rc genhtml_legend=1 00:24:40.838 --rc geninfo_all_blocks=1 00:24:40.838 --rc geninfo_unexecuted_blocks=1 00:24:40.838 00:24:40.838 ' 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:40.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.838 --rc genhtml_branch_coverage=1 00:24:40.838 --rc genhtml_function_coverage=1 00:24:40.838 --rc genhtml_legend=1 00:24:40.838 --rc geninfo_all_blocks=1 00:24:40.838 --rc geninfo_unexecuted_blocks=1 00:24:40.838 00:24:40.838 ' 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
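The scripts/common.sh fragment above is the stock version gate: lt 1.15 2 decides which spelling of the branch/function coverage flags the installed lcov expects by splitting both version strings on ., -, and : and comparing field by field. A standalone sketch of the idiom (the real cmp_versions helper also supports the other comparison operators):

# Returns success (0) when version $1 is strictly lower than $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1   # equal versions are not "less than"
}

# As in the trace: pick the pre-2.x option names when lcov is older than 2.
lt "$(lcov --version | awk '{print $NF}')" 2 \
    && opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'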
00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:40.838 08:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:40.838 ************************************ 00:24:40.838 START TEST nvmf_shutdown_tc1 00:24:40.838 ************************************ 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:40.838 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.839 08:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.017 08:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:49.017 08:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:49.017 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:49.017 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.017 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:49.018 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.018 08:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:49.018 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:24:49.018 00:24:49.018 --- 10.0.0.2 ping statistics --- 00:24:49.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.018 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:24:49.018 00:24:49.018 --- 10.0.0.1 ping statistics --- 00:24:49.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.018 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=3822903 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 3822903 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3822903 ']' 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
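Everything from the PCI scan down to these pings is nvmf_tcp_init building the standard two-port phy topology: the first E810 port discovered (cvl_0_0) is moved into a private namespace and becomes the target endpoint, the second (cvl_0_1) stays in the root namespace as the initiator, and on these phy rigs the two ports are typically cabled back to back, so the sub-millisecond ping replies traverse the real link. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first port goes inside

ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, tagged so nvmftestfini can strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator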
00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:49.018 08:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.018 [2024-10-01 08:39:39.962443] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:24:49.018 [2024-10-01 08:39:39.962507] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.018 [2024-10-01 08:39:40.051911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.018 [2024-10-01 08:39:40.139572] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.018 [2024-10-01 08:39:40.139637] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.018 [2024-10-01 08:39:40.139645] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.018 [2024-10-01 08:39:40.139652] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.018 [2024-10-01 08:39:40.139659] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.018 [2024-10-01 08:39:40.141987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.018 [2024-10-01 08:39:40.142158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:49.018 [2024-10-01 08:39:40.142411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:24:49.018 [2024-10-01 08:39:40.142414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.018 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.335 [2024-10-01 08:39:40.824635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:49.335 08:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.335 08:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.335 Malloc1 
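The run of shutdown.sh@29 -- # cat lines above, followed by a single bare rpc_cmd, is the batched-configuration idiom: rather than issuing forty RPCs one process at a time, shutdown.sh appends four RPC lines per subsystem to rpcs.txt and replays the file through one rpc.py invocation, which is why Malloc1 through Malloc10 then appear in quick succession. A sketch of the pattern; the bdev size and serial values are borrowed from the perf_adq section earlier and should be treated as illustrative:

rm -rf rpcs.txt
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < rpcs.txt    # one process replays all forty commands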
00:24:49.335 [2024-10-01 08:39:40.927947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.335 Malloc2 00:24:49.335 Malloc3 00:24:49.335 Malloc4 00:24:49.335 Malloc5 00:24:49.335 Malloc6 00:24:49.628 Malloc7 00:24:49.628 Malloc8 00:24:49.628 Malloc9 00:24:49.628 Malloc10 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3823206 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3823206 /var/tmp/bdevperf.sock 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3823206 ']' 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
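The wall of config+=("$(cat <<-EOF fragments that fills the rest of this trace is gen_nvmf_target_json at work: it emits one bdev_nvme_attach_controller parameter object per subsystem, and bdev_svc consumes the result through --json /dev/fd/63, i.e. a bash process substitution, so the generated config never touches disk. A trimmed, indentation-safe sketch of the generator (the real helper in nvmf/common.sh carries a few more options than shown here):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the objects into the bdev subsystem config bdevperf expects.
    local IFS=,
    echo "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}

# Consumed without a temp file, hence --json /dev/fd/63 in the trace:
#   bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10})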
00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.628 { 00:24:49.628 "params": { 00:24:49.628 "name": "Nvme$subsystem", 00:24:49.628 "trtype": "$TEST_TRANSPORT", 00:24:49.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.628 "adrfam": "ipv4", 00:24:49.628 "trsvcid": "$NVMF_PORT", 00:24:49.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.628 "hdgst": ${hdgst:-false}, 00:24:49.628 "ddgst": ${ddgst:-false} 00:24:49.628 }, 00:24:49.628 "method": "bdev_nvme_attach_controller" 00:24:49.628 } 00:24:49.628 EOF 00:24:49.628 )") 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.628 { 00:24:49.628 "params": { 00:24:49.628 "name": "Nvme$subsystem", 00:24:49.628 "trtype": "$TEST_TRANSPORT", 00:24:49.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.628 "adrfam": "ipv4", 00:24:49.628 "trsvcid": "$NVMF_PORT", 00:24:49.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.628 "hdgst": ${hdgst:-false}, 00:24:49.628 "ddgst": ${ddgst:-false} 00:24:49.628 }, 00:24:49.628 "method": "bdev_nvme_attach_controller" 00:24:49.628 } 00:24:49.628 EOF 00:24:49.628 )") 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.628 { 00:24:49.628 "params": { 00:24:49.628 "name": "Nvme$subsystem", 00:24:49.628 "trtype": "$TEST_TRANSPORT", 00:24:49.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.628 "adrfam": "ipv4", 00:24:49.628 "trsvcid": "$NVMF_PORT", 00:24:49.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.628 "hdgst": ${hdgst:-false}, 00:24:49.628 "ddgst": ${ddgst:-false} 00:24:49.628 }, 00:24:49.628 "method": "bdev_nvme_attach_controller" 00:24:49.628 } 00:24:49.628 EOF 00:24:49.628 )") 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.628 { 00:24:49.628 "params": { 00:24:49.628 "name": "Nvme$subsystem", 00:24:49.628 "trtype": "$TEST_TRANSPORT", 00:24:49.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.628 "adrfam": "ipv4", 00:24:49.628 "trsvcid": "$NVMF_PORT", 00:24:49.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.628 "hdgst": ${hdgst:-false}, 00:24:49.628 "ddgst": ${ddgst:-false} 00:24:49.628 }, 00:24:49.628 "method": "bdev_nvme_attach_controller" 00:24:49.628 } 00:24:49.628 EOF 00:24:49.628 )") 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.628 { 00:24:49.628 "params": { 00:24:49.628 "name": "Nvme$subsystem", 00:24:49.628 "trtype": "$TEST_TRANSPORT", 00:24:49.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.628 "adrfam": "ipv4", 00:24:49.628 "trsvcid": "$NVMF_PORT", 00:24:49.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.628 "hdgst": ${hdgst:-false}, 00:24:49.628 "ddgst": ${ddgst:-false} 00:24:49.628 }, 00:24:49.628 "method": "bdev_nvme_attach_controller" 00:24:49.628 } 00:24:49.628 EOF 00:24:49.628 )") 00:24:49.628 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.629 { 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme$subsystem", 00:24:49.629 "trtype": "$TEST_TRANSPORT", 00:24:49.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "$NVMF_PORT", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.629 "hdgst": ${hdgst:-false}, 00:24:49.629 "ddgst": ${ddgst:-false} 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 } 00:24:49.629 EOF 00:24:49.629 )") 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.629 [2024-10-01 08:39:41.375778] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
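Stripped of the xtrace prefixes, the repeated config+=("$(cat <<-EOF ... EOF)") records above are one loop iteration per subsystem number. Reduced to plain shell (the variable defaults here are assumptions for the sketch; in the harness they come from the test environment):

TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}
config=()
for subsystem in "${@:-1}"; do
  # One bdev_nvme_attach_controller fragment per subsystem; digests default off.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done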
00:24:49.629 [2024-10-01 08:39:41.375835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.629 { 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme$subsystem", 00:24:49.629 "trtype": "$TEST_TRANSPORT", 00:24:49.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "$NVMF_PORT", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.629 "hdgst": ${hdgst:-false}, 00:24:49.629 "ddgst": ${ddgst:-false} 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 } 00:24:49.629 EOF 00:24:49.629 )") 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.629 { 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme$subsystem", 00:24:49.629 "trtype": "$TEST_TRANSPORT", 00:24:49.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "$NVMF_PORT", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.629 "hdgst": ${hdgst:-false}, 00:24:49.629 "ddgst": ${ddgst:-false} 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 } 00:24:49.629 EOF 00:24:49.629 )") 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.629 { 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme$subsystem", 00:24:49.629 "trtype": "$TEST_TRANSPORT", 00:24:49.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "$NVMF_PORT", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.629 "hdgst": ${hdgst:-false}, 00:24:49.629 "ddgst": ${ddgst:-false} 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 } 00:24:49.629 EOF 00:24:49.629 )") 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.629 { 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme$subsystem", 00:24:49.629 "trtype": "$TEST_TRANSPORT", 00:24:49.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.629 "adrfam": "ipv4", 
00:24:49.629 "trsvcid": "$NVMF_PORT", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.629 "hdgst": ${hdgst:-false}, 00:24:49.629 "ddgst": ${ddgst:-false} 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 } 00:24:49.629 EOF 00:24:49.629 )") 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:24:49.629 08:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme1", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme2", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme3", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme4", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme5", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme6", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme7", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 
"adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme8", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme9", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 },{ 00:24:49.629 "params": { 00:24:49.629 "name": "Nvme10", 00:24:49.629 "trtype": "tcp", 00:24:49.629 "traddr": "10.0.0.2", 00:24:49.629 "adrfam": "ipv4", 00:24:49.629 "trsvcid": "4420", 00:24:49.629 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:49.629 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:49.629 "hdgst": false, 00:24:49.629 "ddgst": false 00:24:49.629 }, 00:24:49.629 "method": "bdev_nvme_attach_controller" 00:24:49.629 }' 00:24:49.629 [2024-10-01 08:39:41.438212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.889 [2024-10-01 08:39:41.503182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3823206 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:51.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3823206 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:51.272 08:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3822903 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.214 { 00:24:52.214 "params": { 00:24:52.214 "name": "Nvme$subsystem", 00:24:52.214 "trtype": "$TEST_TRANSPORT", 00:24:52.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.214 "adrfam": "ipv4", 00:24:52.214 "trsvcid": "$NVMF_PORT", 00:24:52.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.214 "hdgst": ${hdgst:-false}, 00:24:52.214 "ddgst": ${ddgst:-false} 00:24:52.214 }, 00:24:52.214 "method": "bdev_nvme_attach_controller" 00:24:52.214 } 00:24:52.214 EOF 00:24:52.214 )") 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.214 { 00:24:52.214 "params": { 00:24:52.214 "name": "Nvme$subsystem", 00:24:52.214 "trtype": "$TEST_TRANSPORT", 00:24:52.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.214 "adrfam": "ipv4", 00:24:52.214 "trsvcid": "$NVMF_PORT", 00:24:52.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.214 "hdgst": ${hdgst:-false}, 00:24:52.214 "ddgst": ${ddgst:-false} 00:24:52.214 }, 00:24:52.214 "method": "bdev_nvme_attach_controller" 00:24:52.214 } 00:24:52.214 EOF 00:24:52.214 )") 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.214 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.214 { 00:24:52.214 "params": { 00:24:52.214 "name": "Nvme$subsystem", 00:24:52.214 "trtype": "$TEST_TRANSPORT", 00:24:52.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.214 "adrfam": "ipv4", 00:24:52.214 "trsvcid": "$NVMF_PORT", 00:24:52.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.214 "hdgst": ${hdgst:-false}, 00:24:52.214 "ddgst": ${ddgst:-false} 00:24:52.214 }, 00:24:52.214 "method": "bdev_nvme_attach_controller" 00:24:52.215 } 00:24:52.215 EOF 00:24:52.215 )") 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.215 { 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme$subsystem", 00:24:52.215 "trtype": "$TEST_TRANSPORT", 00:24:52.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "$NVMF_PORT", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.215 "hdgst": ${hdgst:-false}, 00:24:52.215 "ddgst": ${ddgst:-false} 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 } 00:24:52.215 EOF 00:24:52.215 )") 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.215 { 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme$subsystem", 00:24:52.215 "trtype": "$TEST_TRANSPORT", 00:24:52.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "$NVMF_PORT", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.215 "hdgst": ${hdgst:-false}, 00:24:52.215 "ddgst": ${ddgst:-false} 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 } 00:24:52.215 EOF 00:24:52.215 )") 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.215 { 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme$subsystem", 00:24:52.215 "trtype": "$TEST_TRANSPORT", 00:24:52.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "$NVMF_PORT", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.215 "hdgst": ${hdgst:-false}, 00:24:52.215 "ddgst": ${ddgst:-false} 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 } 00:24:52.215 EOF 00:24:52.215 )") 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.215 { 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme$subsystem", 00:24:52.215 "trtype": "$TEST_TRANSPORT", 00:24:52.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "$NVMF_PORT", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.215 "hdgst": ${hdgst:-false}, 00:24:52.215 "ddgst": ${ddgst:-false} 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 } 00:24:52.215 EOF 00:24:52.215 )") 00:24:52.215 [2024-10-01 08:39:43.866935] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
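For the run itself, target/shutdown.sh@92 invokes the bdevperf example app against those ten controllers. Reading the flags on the command line above: -q 64 is the per-job queue depth, -o 65536 the I/O size in bytes (64 KiB), -w verify a write-read-check workload, -t 1 a one-second run. A hedged usage sketch, reusing the same config helper:

./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1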
00:24:52.215 [2024-10-01 08:39:43.866989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3823858 ] 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.215 { 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme$subsystem", 00:24:52.215 "trtype": "$TEST_TRANSPORT", 00:24:52.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "$NVMF_PORT", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.215 "hdgst": ${hdgst:-false}, 00:24:52.215 "ddgst": ${ddgst:-false} 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 } 00:24:52.215 EOF 00:24:52.215 )") 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.215 { 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme$subsystem", 00:24:52.215 "trtype": "$TEST_TRANSPORT", 00:24:52.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "$NVMF_PORT", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.215 "hdgst": ${hdgst:-false}, 00:24:52.215 "ddgst": ${ddgst:-false} 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 } 00:24:52.215 EOF 00:24:52.215 )") 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:52.215 { 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme$subsystem", 00:24:52.215 "trtype": "$TEST_TRANSPORT", 00:24:52.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "$NVMF_PORT", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.215 "hdgst": ${hdgst:-false}, 00:24:52.215 "ddgst": ${ddgst:-false} 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 } 00:24:52.215 EOF 00:24:52.215 )") 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
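The closing jq/IFS records of the helper do the final assembly: the per-controller fragments collected in config[] are comma-joined and the aggregate is pretty-printed, which is the single-quoted document echoed in the records below. The join step in isolation (how the harness embeds the list in its enclosing JSON is not shown in this log):

(IFS=','; printf '%s\n' "${config[*]}")  # emits {...},{...},... ready for splicing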
00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:24:52.215 08:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme1", 00:24:52.215 "trtype": "tcp", 00:24:52.215 "traddr": "10.0.0.2", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "4420", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.215 "hdgst": false, 00:24:52.215 "ddgst": false 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 },{ 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme2", 00:24:52.215 "trtype": "tcp", 00:24:52.215 "traddr": "10.0.0.2", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "4420", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:52.215 "hdgst": false, 00:24:52.215 "ddgst": false 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 },{ 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme3", 00:24:52.215 "trtype": "tcp", 00:24:52.215 "traddr": "10.0.0.2", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "4420", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:52.215 "hdgst": false, 00:24:52.215 "ddgst": false 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 },{ 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme4", 00:24:52.215 "trtype": "tcp", 00:24:52.215 "traddr": "10.0.0.2", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "4420", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:52.215 "hdgst": false, 00:24:52.215 "ddgst": false 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 },{ 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme5", 00:24:52.215 "trtype": "tcp", 00:24:52.215 "traddr": "10.0.0.2", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "4420", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:52.215 "hdgst": false, 00:24:52.215 "ddgst": false 00:24:52.215 }, 00:24:52.215 "method": "bdev_nvme_attach_controller" 00:24:52.215 },{ 00:24:52.215 "params": { 00:24:52.215 "name": "Nvme6", 00:24:52.215 "trtype": "tcp", 00:24:52.215 "traddr": "10.0.0.2", 00:24:52.215 "adrfam": "ipv4", 00:24:52.215 "trsvcid": "4420", 00:24:52.215 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:52.215 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:52.216 "hdgst": false, 00:24:52.216 "ddgst": false 00:24:52.216 }, 00:24:52.216 "method": "bdev_nvme_attach_controller" 00:24:52.216 },{ 00:24:52.216 "params": { 00:24:52.216 "name": "Nvme7", 00:24:52.216 "trtype": "tcp", 00:24:52.216 "traddr": "10.0.0.2", 00:24:52.216 "adrfam": "ipv4", 00:24:52.216 "trsvcid": "4420", 00:24:52.216 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:52.216 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:52.216 "hdgst": false, 00:24:52.216 "ddgst": false 00:24:52.216 }, 00:24:52.216 "method": "bdev_nvme_attach_controller" 00:24:52.216 },{ 00:24:52.216 "params": { 00:24:52.216 "name": "Nvme8", 00:24:52.216 "trtype": "tcp", 00:24:52.216 "traddr": "10.0.0.2", 00:24:52.216 "adrfam": "ipv4", 00:24:52.216 "trsvcid": "4420", 00:24:52.216 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:52.216 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:52.216 "hdgst": false, 00:24:52.216 "ddgst": false 00:24:52.216 }, 00:24:52.216 "method": "bdev_nvme_attach_controller" 00:24:52.216 },{ 00:24:52.216 "params": { 00:24:52.216 "name": "Nvme9", 00:24:52.216 "trtype": "tcp", 00:24:52.216 "traddr": "10.0.0.2", 00:24:52.216 "adrfam": "ipv4", 00:24:52.216 "trsvcid": "4420", 00:24:52.216 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:52.216 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:52.216 "hdgst": false, 00:24:52.216 "ddgst": false 00:24:52.216 }, 00:24:52.216 "method": "bdev_nvme_attach_controller" 00:24:52.216 },{ 00:24:52.216 "params": { 00:24:52.216 "name": "Nvme10", 00:24:52.216 "trtype": "tcp", 00:24:52.216 "traddr": "10.0.0.2", 00:24:52.216 "adrfam": "ipv4", 00:24:52.216 "trsvcid": "4420", 00:24:52.216 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:52.216 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:52.216 "hdgst": false, 00:24:52.216 "ddgst": false 00:24:52.216 }, 00:24:52.216 "method": "bdev_nvme_attach_controller" 00:24:52.216 }' 00:24:52.216 [2024-10-01 08:39:43.928479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.216 [2024-10-01 08:39:43.993377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.127 Running I/O for 1 seconds... 00:24:55.068 1867.00 IOPS, 116.69 MiB/s 00:24:55.068 Latency(us) 00:24:55.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.068 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme1n1 : 1.15 222.00 13.88 0.00 0.00 285268.69 18350.08 242920.11 00:24:55.068 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme2n1 : 1.15 223.28 13.95 0.00 0.00 278721.28 13981.01 253405.87 00:24:55.068 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme3n1 : 1.18 270.23 16.89 0.00 0.00 226704.55 15619.41 246415.36 00:24:55.068 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme4n1 : 1.19 268.82 16.80 0.00 0.00 222877.35 11141.12 244667.73 00:24:55.068 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme5n1 : 1.18 216.95 13.56 0.00 0.00 272682.03 17694.72 253405.87 00:24:55.068 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme6n1 : 1.19 215.57 13.47 0.00 0.00 269681.07 21954.56 253405.87 00:24:55.068 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme7n1 : 1.20 266.88 16.68 0.00 0.00 214206.29 13981.01 265639.25 00:24:55.068 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme8n1 : 1.14 223.90 13.99 0.00 0.00 249022.72 18677.76 249910.61 00:24:55.068 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme9n1 : 1.20 214.13 13.38 0.00 0.00 257263.15 21845.33 276125.01 00:24:55.068 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:55.068 
Verification LBA range: start 0x0 length 0x400 00:24:55.068 Nvme10n1 : 1.20 266.12 16.63 0.00 0.00 203300.35 13325.65 248162.99 00:24:55.068 =================================================================================================================== 00:24:55.068 Total : 2387.85 149.24 0.00 0.00 245136.33 11141.12 276125.01 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.068 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.068 rmmod nvme_tcp 00:24:55.330 rmmod nvme_fabrics 00:24:55.330 rmmod nvme_keyring 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 3822903 ']' 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 3822903 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3822903 ']' 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3822903 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:55.330 08:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3822903 00:24:55.330 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:55.330 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:55.330 
08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3822903' 00:24:55.330 killing process with pid 3822903 00:24:55.330 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3822903 00:24:55.330 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3822903 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.591 08:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.134 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:58.134 00:24:58.134 real 0m16.708s 00:24:58.134 user 0m34.546s 00:24:58.134 sys 0m6.730s 00:24:58.134 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:58.134 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:58.135 ************************************ 00:24:58.135 END TEST nvmf_shutdown_tc1 00:24:58.135 ************************************ 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:58.135 ************************************ 00:24:58.135 START TEST nvmf_shutdown_tc2 00:24:58.135 ************************************ 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # 
starttarget 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:58.135 08:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:58.135 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:58.135 08:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:58.135 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:58.135 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:58.136 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:58.136 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # 
ip link set cvl_0_1 up 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:24:58.136 00:24:58.136 --- 10.0.0.2 ping statistics --- 00:24:58.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.136 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:24:58.136 00:24:58.136 --- 10.0.0.1 ping statistics --- 00:24:58.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.136 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:58.136 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3825009 00:24:58.137 
08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3825009 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3825009 ']' 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.137 08:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.137 [2024-10-01 08:39:49.837614] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:24:58.137 [2024-10-01 08:39:49.837676] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.137 [2024-10-01 08:39:49.923673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.397 [2024-10-01 08:39:49.983570] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.397 [2024-10-01 08:39:49.983604] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.397 [2024-10-01 08:39:49.983610] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.397 [2024-10-01 08:39:49.983614] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.397 [2024-10-01 08:39:49.983619] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
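The -m 0x1E mask handed to nvmf_tgt decodes as bits 1 through 4 set with bit 0 clear, which is why the EAL reports "Total cores available: 4" and the records below show reactors starting only on cores 1-4:

printf '0x%X\n' "$(( (1 << 1) | (1 << 2) | (1 << 3) | (1 << 4) ))"  # -> 0x1E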
00:24:58.397 [2024-10-01 08:39:49.985148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.397 [2024-10-01 08:39:49.985386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.397 [2024-10-01 08:39:49.985540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.397 [2024-10-01 08:39:49.985541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.969 [2024-10-01 08:39:50.689670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.969 08:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.969 Malloc1 00:24:58.969 [2024-10-01 08:39:50.788225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.230 Malloc2 00:24:59.230 Malloc3 00:24:59.230 Malloc4 00:24:59.230 Malloc5 00:24:59.230 Malloc6 00:24:59.230 Malloc7 00:24:59.230 Malloc8 00:24:59.491 Malloc9 00:24:59.491 Malloc10 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3825389 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3825389 /var/tmp/bdevperf.sock 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3825389 ']' 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.492 08:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:59.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 "name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.492 }, 00:24:59.492 "method": "bdev_nvme_attach_controller" 00:24:59.492 } 00:24:59.492 EOF 00:24:59.492 )") 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 "name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.492 }, 00:24:59.492 "method": "bdev_nvme_attach_controller" 00:24:59.492 } 00:24:59.492 EOF 00:24:59.492 )") 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 
"name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.492 }, 00:24:59.492 "method": "bdev_nvme_attach_controller" 00:24:59.492 } 00:24:59.492 EOF 00:24:59.492 )") 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 "name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.492 }, 00:24:59.492 "method": "bdev_nvme_attach_controller" 00:24:59.492 } 00:24:59.492 EOF 00:24:59.492 )") 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 "name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.492 }, 00:24:59.492 "method": "bdev_nvme_attach_controller" 00:24:59.492 } 00:24:59.492 EOF 00:24:59.492 )") 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 "name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.492 }, 00:24:59.492 "method": "bdev_nvme_attach_controller" 00:24:59.492 } 00:24:59.492 EOF 00:24:59.492 )") 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:24:59.492 [2024-10-01 08:39:51.239480] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:24:59.492 [2024-10-01 08:39:51.239536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3825389 ] 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 "name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.492 }, 00:24:59.492 "method": "bdev_nvme_attach_controller" 00:24:59.492 } 00:24:59.492 EOF 00:24:59.492 )") 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 "name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.492 }, 00:24:59.492 "method": "bdev_nvme_attach_controller" 00:24:59.492 } 00:24:59.492 EOF 00:24:59.492 )") 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.492 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.492 { 00:24:59.492 "params": { 00:24:59.492 "name": "Nvme$subsystem", 00:24:59.492 "trtype": "$TEST_TRANSPORT", 00:24:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.492 "adrfam": "ipv4", 00:24:59.492 "trsvcid": "$NVMF_PORT", 00:24:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.492 "hdgst": ${hdgst:-false}, 00:24:59.492 "ddgst": ${ddgst:-false} 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 } 00:24:59.493 EOF 00:24:59.493 )") 00:24:59.493 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.493 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:59.493 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:59.493 { 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme$subsystem", 00:24:59.493 "trtype": "$TEST_TRANSPORT", 00:24:59.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.493 
"adrfam": "ipv4", 00:24:59.493 "trsvcid": "$NVMF_PORT", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.493 "hdgst": ${hdgst:-false}, 00:24:59.493 "ddgst": ${ddgst:-false} 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 } 00:24:59.493 EOF 00:24:59.493 )") 00:24:59.493 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:24:59.493 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:24:59.493 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:24:59.493 08:39:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme1", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme2", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme3", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme4", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme5", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme6", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme7", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 
00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme8", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme9", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 },{ 00:24:59.493 "params": { 00:24:59.493 "name": "Nvme10", 00:24:59.493 "trtype": "tcp", 00:24:59.493 "traddr": "10.0.0.2", 00:24:59.493 "adrfam": "ipv4", 00:24:59.493 "trsvcid": "4420", 00:24:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:59.493 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:59.493 "hdgst": false, 00:24:59.493 "ddgst": false 00:24:59.493 }, 00:24:59.493 "method": "bdev_nvme_attach_controller" 00:24:59.493 }' 00:24:59.493 [2024-10-01 08:39:51.300896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.754 [2024-10-01 08:39:51.365494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.140 Running I/O for 10 seconds... 
00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:01.140 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:01.141 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:01.141 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.141 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.141 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:01.141 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.141 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:01.141 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:01.141 08:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:01.401 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:01.401 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:01.401 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:01.401 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:01.401 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.402 08:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.662 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.662 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:01.662 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:01.662 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3825389 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3825389 ']' 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3825389 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3825389 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3825389' 00:25:01.926 killing process with pid 3825389 00:25:01.926 08:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3825389 00:25:01.926 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3825389
00:25:01.926 Received shutdown signal, test time was about 0.969159 seconds
00:25:01.926
00:25:01.926                                                                   Latency(us)
00:25:01.926 Device Information                                                        : runtime(s)      IOPS     MiB/s    Fail/s     TO/s     Average        min        max
00:25:01.926 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme1n1                                                               :       0.97    264.39     16.52      0.00     0.00   239176.53   15510.19   244667.73
00:25:01.926 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme2n1                                                               :       0.94    220.38     13.77      0.00     0.00   278241.28    7918.93   251658.24
00:25:01.926 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme3n1                                                               :       0.97    265.24     16.58      0.00     0.00   228524.80   18350.08   221948.59
00:25:01.926 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme4n1                                                               :       0.96    265.50     16.59      0.00     0.00   223797.97   26105.17   255153.49
00:25:01.926 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme5n1                                                               :       0.96    266.88     16.68      0.00     0.00   218057.81   20862.29   235929.60
00:25:01.926 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme6n1                                                               :       0.94    204.09     12.76      0.00     0.00   278260.91   21299.20   244667.73
00:25:01.926 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme7n1                                                               :       0.96    271.68     16.98      0.00     0.00   204629.23   19333.12   246415.36
00:25:01.926 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme8n1                                                               :       0.95    269.37     16.84      0.00     0.00   201585.92   15510.19   246415.36
00:25:01.926 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme9n1                                                               :       0.95    202.70     12.67      0.00     0.00   261568.85   18022.40   248162.99
00:25:01.926 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:01.926     Verification LBA range: start 0x0 length 0x400
00:25:01.926     Nvme10n1                                                              :       0.95    201.57     12.60      0.00     0.00   257166.22   39976.96   267386.88
00:25:01.926 ===================================================================================================================
00:25:01.926 Total                                                                     :    2431.80   151.99      0.00     0.00   236038.57    7918.93   267386.88
00:25:02.188 08:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3825009 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.133 rmmod nvme_tcp 00:25:03.133 rmmod nvme_fabrics 00:25:03.133 rmmod nvme_keyring 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 3825009 ']' 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 3825009 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3825009 ']' 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3825009 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.133 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3825009 00:25:03.395 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:03.395 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:03.395 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3825009' 00:25:03.395 killing process with pid 3825009 00:25:03.395 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3825009 00:25:03.395 08:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3825009 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 
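The readiness gate for bdevperf traced above (target/shutdown.sh@51 through @70) polls Nvme1n1's read counter over the bdevperf RPC socket until at least 100 reads have completed, which is why the trace shows read_io_count going 3 -> 67 -> 131 before 'break' and 'return 0'. Reconstructed from that trace as a sketch (rpc_cmd stands in for the harness's RPC wrapper; error handling in the real helper may differ):

waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1   # cf. shutdown.sh@51
    [ -z "$bdev" ] && return 1   # cf. shutdown.sh@55
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # read back per-bdev stats and keep only the read-op count
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
# invoked above as: waitforio /var/tmp/bdevperf.sock Nvme1n1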
00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.656 08:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.568 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.568 00:25:05.568 real 0m7.888s 00:25:05.568 user 0m23.679s 00:25:05.568 sys 0m1.307s 00:25:05.568 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.568 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.568 ************************************ 00:25:05.568 END TEST nvmf_shutdown_tc2 00:25:05.568 ************************************ 00:25:05.568 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:05.568 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:05.568 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.568 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:05.829 ************************************ 00:25:05.829 START TEST nvmf_shutdown_tc3 00:25:05.829 ************************************ 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:05.829 08:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.829 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.830 08:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:05.830 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:05.830 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.830 
08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:05.830 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:05.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:25:05.830 08:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.830 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:25:06.091 00:25:06.091 --- 10.0.0.2 ping statistics --- 00:25:06.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.091 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:25:06.091 00:25:06.091 --- 10.0.0.1 ping statistics --- 00:25:06.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.091 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=3826791 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 3826791 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3826791 ']' 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.091 08:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.091 [2024-10-01 08:39:57.849943] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:25:06.091 [2024-10-01 08:39:57.850017] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.352 [2024-10-01 08:39:57.936827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.352 [2024-10-01 08:39:57.997357] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.352 [2024-10-01 08:39:57.997390] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.352 [2024-10-01 08:39:57.997396] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.352 [2024-10-01 08:39:57.997400] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.352 [2024-10-01 08:39:57.997405] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
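The nvmf_tcp_init sequence traced above is reproducible by hand. A minimal sketch in bash, using the same two e810 port names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addressing as this run; the ipts helper seen in the trace is just iptables plus a bookkeeping comment:

    # Put the target-side port in its own network namespace so initiator and
    # target can exercise a real TCP path on a single host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, then confirm reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target process itself then runs inside that namespace, which is why the nvmf_tgt command line above carries the ip netns exec cvl_0_0_ns_spdk prefix (NVMF_TARGET_NS_CMD prepended onto NVMF_APP).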
00:25:06.352 [2024-10-01 08:39:57.998902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.352 [2024-10-01 08:39:57.999060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.352 [2024-10-01 08:39:57.999219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.352 [2024-10-01 08:39:57.999221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.924 [2024-10-01 08:39:58.699234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.924 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.186 08:39:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.186 Malloc1 00:25:07.186 [2024-10-01 08:39:58.797786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.186 Malloc2 00:25:07.186 Malloc3 00:25:07.186 Malloc4 00:25:07.186 Malloc5 00:25:07.186 Malloc6 00:25:07.186 Malloc7 00:25:07.446 Malloc8 00:25:07.446 Malloc9 00:25:07.446 Malloc10 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3827011 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3827011 /var/tmp/bdevperf.sock 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3827011 ']' 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.446 08:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.446 { 00:25:07.446 "params": { 00:25:07.446 "name": "Nvme$subsystem", 00:25:07.446 "trtype": "$TEST_TRANSPORT", 00:25:07.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.446 "adrfam": "ipv4", 00:25:07.446 "trsvcid": "$NVMF_PORT", 00:25:07.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.446 "hdgst": ${hdgst:-false}, 00:25:07.446 "ddgst": ${ddgst:-false} 00:25:07.446 }, 00:25:07.446 "method": "bdev_nvme_attach_controller" 00:25:07.446 } 00:25:07.446 EOF 00:25:07.446 )") 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.446 { 00:25:07.446 "params": { 00:25:07.446 "name": "Nvme$subsystem", 00:25:07.446 "trtype": "$TEST_TRANSPORT", 00:25:07.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.446 "adrfam": "ipv4", 00:25:07.446 "trsvcid": "$NVMF_PORT", 00:25:07.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.446 "hdgst": ${hdgst:-false}, 00:25:07.446 "ddgst": ${ddgst:-false} 00:25:07.446 }, 00:25:07.446 "method": "bdev_nvme_attach_controller" 00:25:07.446 } 00:25:07.446 EOF 00:25:07.446 )") 00:25:07.446 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.447 { 00:25:07.447 "params": { 00:25:07.447 
"name": "Nvme$subsystem", 00:25:07.447 "trtype": "$TEST_TRANSPORT", 00:25:07.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.447 "adrfam": "ipv4", 00:25:07.447 "trsvcid": "$NVMF_PORT", 00:25:07.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.447 "hdgst": ${hdgst:-false}, 00:25:07.447 "ddgst": ${ddgst:-false} 00:25:07.447 }, 00:25:07.447 "method": "bdev_nvme_attach_controller" 00:25:07.447 } 00:25:07.447 EOF 00:25:07.447 )") 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.447 { 00:25:07.447 "params": { 00:25:07.447 "name": "Nvme$subsystem", 00:25:07.447 "trtype": "$TEST_TRANSPORT", 00:25:07.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.447 "adrfam": "ipv4", 00:25:07.447 "trsvcid": "$NVMF_PORT", 00:25:07.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.447 "hdgst": ${hdgst:-false}, 00:25:07.447 "ddgst": ${ddgst:-false} 00:25:07.447 }, 00:25:07.447 "method": "bdev_nvme_attach_controller" 00:25:07.447 } 00:25:07.447 EOF 00:25:07.447 )") 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.447 { 00:25:07.447 "params": { 00:25:07.447 "name": "Nvme$subsystem", 00:25:07.447 "trtype": "$TEST_TRANSPORT", 00:25:07.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.447 "adrfam": "ipv4", 00:25:07.447 "trsvcid": "$NVMF_PORT", 00:25:07.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.447 "hdgst": ${hdgst:-false}, 00:25:07.447 "ddgst": ${ddgst:-false} 00:25:07.447 }, 00:25:07.447 "method": "bdev_nvme_attach_controller" 00:25:07.447 } 00:25:07.447 EOF 00:25:07.447 )") 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.447 { 00:25:07.447 "params": { 00:25:07.447 "name": "Nvme$subsystem", 00:25:07.447 "trtype": "$TEST_TRANSPORT", 00:25:07.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.447 "adrfam": "ipv4", 00:25:07.447 "trsvcid": "$NVMF_PORT", 00:25:07.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.447 "hdgst": ${hdgst:-false}, 00:25:07.447 "ddgst": ${ddgst:-false} 00:25:07.447 }, 00:25:07.447 "method": "bdev_nvme_attach_controller" 00:25:07.447 } 00:25:07.447 EOF 00:25:07.447 )") 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in 
"${@:-1}" 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.447 { 00:25:07.447 "params": { 00:25:07.447 "name": "Nvme$subsystem", 00:25:07.447 "trtype": "$TEST_TRANSPORT", 00:25:07.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.447 "adrfam": "ipv4", 00:25:07.447 "trsvcid": "$NVMF_PORT", 00:25:07.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.447 "hdgst": ${hdgst:-false}, 00:25:07.447 "ddgst": ${ddgst:-false} 00:25:07.447 }, 00:25:07.447 "method": "bdev_nvme_attach_controller" 00:25:07.447 } 00:25:07.447 EOF 00:25:07.447 )") 00:25:07.447 [2024-10-01 08:39:59.243566] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:25:07.447 [2024-10-01 08:39:59.243620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827011 ] 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.447 { 00:25:07.447 "params": { 00:25:07.447 "name": "Nvme$subsystem", 00:25:07.447 "trtype": "$TEST_TRANSPORT", 00:25:07.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.447 "adrfam": "ipv4", 00:25:07.447 "trsvcid": "$NVMF_PORT", 00:25:07.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.447 "hdgst": ${hdgst:-false}, 00:25:07.447 "ddgst": ${ddgst:-false} 00:25:07.447 }, 00:25:07.447 "method": "bdev_nvme_attach_controller" 00:25:07.447 } 00:25:07.447 EOF 00:25:07.447 )") 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.447 { 00:25:07.447 "params": { 00:25:07.447 "name": "Nvme$subsystem", 00:25:07.447 "trtype": "$TEST_TRANSPORT", 00:25:07.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.447 "adrfam": "ipv4", 00:25:07.447 "trsvcid": "$NVMF_PORT", 00:25:07.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.447 "hdgst": ${hdgst:-false}, 00:25:07.447 "ddgst": ${ddgst:-false} 00:25:07.447 }, 00:25:07.447 "method": "bdev_nvme_attach_controller" 00:25:07.447 } 00:25:07.447 EOF 00:25:07.447 )") 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:25:07.447 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:25:07.447 { 00:25:07.447 "params": { 00:25:07.447 "name": "Nvme$subsystem", 00:25:07.447 "trtype": "$TEST_TRANSPORT", 00:25:07.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.447 
"adrfam": "ipv4", 00:25:07.447 "trsvcid": "$NVMF_PORT", 00:25:07.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.447 "hdgst": ${hdgst:-false}, 00:25:07.447 "ddgst": ${ddgst:-false} 00:25:07.447 }, 00:25:07.447 "method": "bdev_nvme_attach_controller" 00:25:07.447 } 00:25:07.447 EOF 00:25:07.447 )") 00:25:07.708 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:25:07.708 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:25:07.708 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:25:07.708 08:39:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme1", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme2", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme3", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme4", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme5", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme6", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme7", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 
00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme8", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme9", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 },{ 00:25:07.708 "params": { 00:25:07.708 "name": "Nvme10", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:07.708 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false 00:25:07.708 }, 00:25:07.708 "method": "bdev_nvme_attach_controller" 00:25:07.708 }' 00:25:07.708 [2024-10-01 08:39:59.305363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.708 [2024-10-01 08:39:59.370035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.620 Running I/O for 10 seconds... 
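Before the I/O run starts, the create_subsystems step above has produced ten malloc bdevs (Malloc1 through Malloc10), ten subsystems, and a listener on 10.0.0.2:4420; the actual RPC batch is accumulated in rpcs.txt by the cat loop. A hedged reconstruction of what that batch amounts to, as standalone scripts/rpc.py calls; the malloc size and block size here are illustrative placeholders, not values taken from this run:

    # Roughly one pass of this per subsystem, 1..10.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in {1..10}; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512   # size/bs assumed
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done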
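The bdevperf invocation above receives the JSON printed by gen_nvmf_target_json over /dev/fd/63. An equivalent process-substitution form of the same run (queue depth 64, 64 KiB I/Os, verify workload, 10 seconds), assuming the harness's gen_nvmf_target_json helper is sourced from nvmf/common.sh:

    # One NVMe-oF controller per subsystem, all pointed at 10.0.0.2:4420.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10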
00:25:09.620 08:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:09.620 08:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:09.620 08:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:09.620 08:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.620 08:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.620 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:09.881 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.881 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:09.881 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:09.881 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3826791 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3826791 ']' 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3826791 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3826791 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:10.163 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:10.163 08:40:01 
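The waitforio loop traced here polls bdevperf's RPC socket until the first attached bdev (Nvme1n1) has completed at least 100 reads, which it reaches on the third sample (3, then 67, then 131 ops). A sketch of the same loop as it appears in the trace:

    # Up to 10 samples, 0.25 s apart, before declaring the I/O path dead.
    i=10; ret=1
    while (( i != 0 )); do
        read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && { ret=0; break; }
        sleep 0.25
        (( i-- ))
    done

With ret=0 the harness proceeds to kill the target out from under the still-running workload, which is the whole point of shutdown_tc3.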
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3826791' killing process with pid 3826791 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3826791 08:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3826791 00:25:10.163
[2024-10-01 08:40:01.844574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121fbf0 is same with the state(6) to be set
[... same tqpair=0x121fbf0 recv-state message repeated several dozen times ...]
00:25:10.164 [2024-10-01 08:40:01.844939]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121fbf0 is same with the state(6) to be set 00:25:10.164 [2024-10-01 08:40:01.845549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.164 [2024-10-01 08:40:01.845585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.845596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.164 [2024-10-01 08:40:01.845604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.845613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.164 [2024-10-01 08:40:01.845620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.845629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.164 [2024-10-01 08:40:01.845636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.845644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163af30 is same with the state(6) to be set 00:25:10.164 [2024-10-01 08:40:01.845685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.164 [2024-10-01 08:40:01.845695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.845704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.164 [2024-10-01 08:40:01.845712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.845720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.164 [2024-10-01 08:40:01.845728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.845736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.164 [2024-10-01 08:40:01.845744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.845752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab3e30 is same with the state(6) to be set 00:25:10.164 [2024-10-01 08:40:01.846757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.846981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.846990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.847005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.847014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.847024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.847035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.847042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.847052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.847059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.847069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.847077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.847086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.847093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.847103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.847110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.164 [2024-10-01 08:40:01.847120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.164 [2024-10-01 08:40:01.847127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.165 [2024-10-01 08:40:01.847137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.165 [2024-10-01 08:40:01.847144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
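All of this abort traffic is expected fallout from killing pid 3826791 mid-run: as the target tears down its submission queues, every in-flight bdevperf WRITE completes back with ABORTED - SQ DELETION (00/08). When triaging a run like this one, a quick tally separates expected shutdown noise from real failures (log file name assumed for illustration):

    # Expected on tc3: only SQ-deletion aborts, no other error statuses.
    grep -c 'ABORTED - SQ DELETION (00/08)' nvmf_shutdown.log
    grep -c 'is same with the state(6) to be set' nvmf_shutdown.log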
00:25:10.165 [2024-10-01 08:40:01.847154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1010 is same with the state(6) to be set
[the tcp.c:1773 recv-state error for tqpair=0x11f1010 repeats ~60 more times between 08:40:01.847186 and 08:40:01.847550, interleaved mid-line with the WRITE/ABORTED messages below]
00:25:10.165 [2024-10-01 08:40:01.847171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.165 [2024-10-01 08:40:01.847448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.165 [2024-10-01 08:40:01.847458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.166 [2024-10-01 08:40:01.847468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.166 [2024-10-01 08:40:01.847476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.166 [2024-10-01 08:40:01.847486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.166 [2024-10-01 08:40:01.847494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.166 [2024-10-01 08:40:01.847508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.166 [2024-10-01 08:40:01.847515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.166 [2024-10-01 08:40:01.847526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.166 [2024-10-01 08:40:01.847535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.166 [2024-10-01 08:40:01.847546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.166 [2024-10-01 08:40:01.847554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.166 [2024-10-01 08:40:01.847887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.166 [2024-10-01 08:40:01.847894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
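In these completion lines the "(00/08)" pair is the NVMe status code type and status code: SCT 0x0 is the generic command status set, in which SC 0x08 is "Command Aborted due to SQ Deletion", matching what the target returns for I/O still in flight when its submission queue is torn down during the disconnect below. A small decoder sketch under that reading; the tables are deliberately partial (only entries relevant here plus well-known neighbours), and the full lists live in the NVMe base specification:

# Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints,
# e.g. "(00/08)" above. Partial tables; see the NVMe base spec.
STATUS_CODE_TYPES = {
    0x0: "GENERIC",
    0x1: "COMMAND SPECIFIC",
    0x2: "MEDIA AND DATA INTEGRITY",
}
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}

def decode_status(pair):
    """'00/08' -> 'GENERIC: COMMAND ABORTED DUE TO SQ DELETION'."""
    sct, sc = (int(x, 16) for x in pair.split("/"))
    sct_name = STATUS_CODE_TYPES.get(sct, "SCT 0x%x" % sct)
    if sct == 0x0 and sc in GENERIC_STATUS:
        return "%s: %s" % (sct_name, GENERIC_STATUS[sc])
    return "%s: SC 0x%02x" % (sct_name, sc)

assert decode_status("00/08") == "GENERIC: COMMAND ABORTED DUE TO SQ DELETION"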
00:25:10.166 [2024-10-01 08:40:01.847903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.166 [2024-10-01 08:40:01.847910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.166 [2024-10-01 08:40:01.847919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.166 [2024-10-01 08:40:01.847926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.166 [2024-10-01 08:40:01.847951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:10.166 [2024-10-01 08:40:01.847993] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1840360 was disconnected and freed. reset controller.
00:25:10.166 [2024-10-01 08:40:01.848369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1500 is same with the state(6) to be set
00:25:10.166 [2024-10-01 08:40:01.848391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1500 is same with the state(6) to be set
00:25:10.166 [2024-10-01 08:40:01.848397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1500 is same with the state(6) to be set
00:25:10.166 [2024-10-01 08:40:01.848403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1500 is same with the state(6) to be set
00:25:10.166 [2024-10-01 08:40:01.848993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.166 [2024-10-01 08:40:01.849016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.166 [2024-10-01 08:40:01.849014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1880 is same with the state(6) to be set
[the tcp.c:1773 recv-state error for tqpair=0x11f1880 repeats ~60 more times between 08:40:01.849037 and 08:40:01.849412, interleaved mid-line with the WRITE/ABORTED messages below]
00:25:10.166 [2024-10-01 08:40:01.849029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.167 [2024-10-01 08:40:01.849293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.167 [2024-10-01 08:40:01.849303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.168 [2024-10-01 08:40:01.849503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.168 [2024-10-01 08:40:01.849514] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.168 [2024-10-01 08:40:01.849842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.168 [2024-10-01 08:40:01.849851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.849868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.849884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.849901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.849917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.849934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.849952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.849968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.849984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.849993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1d50 is same with the state(6) to be set 00:25:10.169 [2024-10-01 08:40:01.850165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1d50 is same with the state(6) to be set 00:25:10.169 [2024-10-01 08:40:01.850170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f1d50 is same with the state(6) to be set 00:25:10.169 [2024-10-01 08:40:01.850179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:10.169 [2024-10-01 08:40:01.850215] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b27510 
was disconnected and freed. reset controller. 00:25:10.169 [2024-10-01 08:40:01.850676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.850975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.850989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.851009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.851025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.851033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.169 [2024-10-01 08:40:01.851042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.169 [2024-10-01 08:40:01.851050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:10.170 [2024-10-01 08:40:01.851059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.170 [2024-10-01 08:40:01.851067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.170 [2024-10-01 08:40:01.851076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.170 [2024-10-01 08:40:01.851083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.170 [2024-10-01 08:40:01.851095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.170 [2024-10-01 08:40:01.851103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.170 [2024-10-01 08:40:01.851113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.170
[2024-10-01 08:40:01.851119 - 08:40:01.851430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2220 is same with the state(6) to be set (same message logged ~63 times in this interval)
00:25:10.170 [2024-10-01 08:40:01.852040 - 08:40:01.852351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f2710 is same with the state(6) to be set (same message logged ~63 times in this interval)
00:25:10.171 [2024-10-01 08:40:01.852814 - 08:40:01.855393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121f720 is same with the state(6) to be set (same message logged ~60 times in this interval)
00:25:10.172 [2024-10-01 08:40:01.855448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121f720 is same with the
state(6) to be set 00:25:10.172 [2024-10-01 08:40:01.855501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121f720 is same with the state(6) to be set 00:25:10.172 [2024-10-01 08:40:01.866357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 
[2024-10-01 08:40:01.866552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 
08:40:01.866727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.172 [2024-10-01 08:40:01.866761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.172 [2024-10-01 08:40:01.866769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 
08:40:01.866898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.866976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.866986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.867003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.867020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.867039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.867056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.867073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867083] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.173 [2024-10-01 08:40:01.867090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:10.173 [2024-10-01 08:40:01.867162] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a3b4a0 was disconnected and freed. reset controller. 00:25:10.173 [2024-10-01 08:40:01.867399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.173 [2024-10-01 08:40:01.867417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.173 [2024-10-01 08:40:01.867434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.173 [2024-10-01 08:40:01.867450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.173 [2024-10-01 08:40:01.867465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c380 is same with the state(6) to be set 00:25:10.173 [2024-10-01 08:40:01.867531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.173 [2024-10-01 08:40:01.867541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.173 [2024-10-01 08:40:01.867557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.173 [2024-10-01 08:40:01.867573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.173 [2024-10-01 08:40:01.867581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.173 [2024-10-01 08:40:01.867589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638990 is same with the state(6) to be set
00:25:10.173 [2024-10-01 08:40:01.867618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163af30 (9): Bad file descriptor
00:25:10.173 [2024-10-01 08:40:01.867641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163aad0 is same with the state(6) to be set
00:25:10.173 [2024-10-01 08:40:01.867725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630120 is same with the state(6) to be set
00:25:10.173 [2024-10-01 08:40:01.867807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab3e30 (9): Bad file descriptor
00:25:10.173 [2024-10-01 08:40:01.867833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.173 [2024-10-01 08:40:01.867859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.173 [2024-10-01 08:40:01.867869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.867877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.867886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.867893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.867901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a64e90 is same with the state(6) to be set
00:25:10.174 [2024-10-01 08:40:01.867922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.867931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.867939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.867946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.867955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.867962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.867970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.867977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.867985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a651b0 is same with the state(6) to be set
00:25:10.174 [2024-10-01 08:40:01.868020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.868030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.868045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.868060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.174 [2024-10-01 08:40:01.868076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553610 is same with the state(6) to be set
00:25:10.174 [2024-10-01 08:40:01.868170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.174 [2024-10-01 08:40:01.868675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.174 [2024-10-01 08:40:01.868683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.868983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.868990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.175 [2024-10-01 08:40:01.869283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.869330] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x183f0b0 was disconnected and freed. reset controller.
00:25:10.175 [2024-10-01 08:40:01.872540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121f720 is same with the state(6) to be set
00:25:10.175 [2024-10-01 08:40:01.873199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:10.175 [2024-10-01 08:40:01.879235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:10.175 [2024-10-01 08:40:01.879272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a64e90 (9): Bad file descriptor
00:25:10.175 [2024-10-01 08:40:01.879288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163aad0 (9): Bad file descriptor
00:25:10.175 [2024-10-01 08:40:01.880822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:10.175 [2024-10-01 08:40:01.880848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:10.175 [2024-10-01 08:40:01.880871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5c380 (9): Bad file descriptor
00:25:10.175 [2024-10-01 08:40:01.880930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.175 [2024-10-01 08:40:01.880946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.175 [2024-10-01 08:40:01.880957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.176 [2024-10-01 08:40:01.880964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.880973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.176 [2024-10-01 08:40:01.880981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.880990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:10.176 [2024-10-01 08:40:01.881004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.881012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aae880 is same with the state(6) to be set
00:25:10.176 [2024-10-01 08:40:01.881033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638990 (9): Bad file descriptor
00:25:10.176 [2024-10-01 08:40:01.881052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1630120 (9): Bad file descriptor
00:25:10.176 [2024-10-01 08:40:01.881078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a651b0 (9): Bad file descriptor
00:25:10.176 [2024-10-01 08:40:01.881095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1553610 (9): Bad file descriptor
00:25:10.176 [2024-10-01 08:40:01.882129] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:10.176 [2024-10-01 08:40:01.882637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.176 [2024-10-01 08:40:01.882657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163aad0 with addr=10.0.0.2, port=4420
00:25:10.176 [2024-10-01 08:40:01.882666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163aad0 is same with the state(6) to be set
00:25:10.176 [2024-10-01 08:40:01.882858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.176 [2024-10-01 08:40:01.882869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a64e90 with addr=10.0.0.2, port=4420
00:25:10.176 [2024-10-01 08:40:01.882881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a64e90 is same with the state(6) to be set
00:25:10.176 [2024-10-01 08:40:01.883294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.176 [2024-10-01 08:40:01.883335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163af30 with addr=10.0.0.2, port=4420
00:25:10.176 [2024-10-01 08:40:01.883346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163af30 is same with the state(6) to be set
00:25:10.176 [2024-10-01 08:40:01.883685] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:10.176 [2024-10-01 08:40:01.883739] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:10.176 [2024-10-01 08:40:01.883783] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:10.176 [2024-10-01 08:40:01.883819] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:10.176 [2024-10-01 08:40:01.883901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.883914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.883929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.883938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.883948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.883956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.883966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.883974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.883984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.883992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.176 [2024-10-01 08:40:01.884351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.176 [2024-10-01 08:40:01.884359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.884980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.884991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.885003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.885013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.885021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.885030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.885038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.885048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.177 [2024-10-01 08:40:01.885056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.177 [2024-10-01 08:40:01.885064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a40870 is same with the state(6) to be set
00:25:10.178 [2024-10-01 08:40:01.886634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:10.178 [2024-10-01 08:40:01.887036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.178 [2024-10-01 08:40:01.887062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5c380 with addr=10.0.0.2, port=4420
00:25:10.178 [2024-10-01 08:40:01.887072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5c380 is same with the state(6) to be set
00:25:10.178 [2024-10-01 08:40:01.887084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163aad0 (9): Bad file descriptor
00:25:10.178 [2024-10-01 08:40:01.887095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a64e90 (9): Bad file descriptor
00:25:10.178 [2024-10-01 08:40:01.887104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163af30 (9): Bad file descriptor
00:25:10.178 [2024-10-01 08:40:01.887223] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:10.178 [2024-10-01 08:40:01.887543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.178 [2024-10-01 08:40:01.887557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab3e30 with addr=10.0.0.2, port=4420
00:25:10.178 [2024-10-01 08:40:01.887565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab3e30 is same with the state(6) to be set
00:25:10.178 [2024-10-01 08:40:01.887575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5c380 (9): Bad file descriptor
00:25:10.178 [2024-10-01 08:40:01.887584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:10.178 [2024-10-01 08:40:01.887591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:10.178 [2024-10-01 08:40:01.887600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:10.178 [2024-10-01 08:40:01.887614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:25:10.178 [2024-10-01 08:40:01.887621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:25:10.178 [2024-10-01 08:40:01.887628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:25:10.178 [2024-10-01 08:40:01.887639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:10.178 [2024-10-01 08:40:01.887650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:10.178 [2024-10-01 08:40:01.887657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:10.178 [2024-10-01 08:40:01.887965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:10.178 [2024-10-01 08:40:01.887977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:10.178 [2024-10-01 08:40:01.887984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:10.178 [2024-10-01 08:40:01.887992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab3e30 (9): Bad file descriptor
00:25:10.178 [2024-10-01 08:40:01.888008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:25:10.178 [2024-10-01 08:40:01.888015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:25:10.178 [2024-10-01 08:40:01.888022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:25:10.178 [2024-10-01 08:40:01.888066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:10.178 [2024-10-01 08:40:01.888075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:25:10.178 [2024-10-01 08:40:01.888081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:25:10.178 [2024-10-01 08:40:01.888088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:25:10.178 [2024-10-01 08:40:01.888128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:10.178 [2024-10-01 08:40:01.890858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aae880 (9): Bad file descriptor
00:25:10.178 [2024-10-01 08:40:01.891003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.178 [2024-10-01 08:40:01.891283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.178 [2024-10-01 08:40:01.891291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.178 [2024-10-01 08:40:01.891452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.178 [2024-10-01 08:40:01.891463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:10.179 [2024-10-01 08:40:01.891481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 
08:40:01.891661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891837] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.891983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.891997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.892150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.179 [2024-10-01 08:40:01.892159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25f70 is same with the state(6) to be set 00:25:10.179 [2024-10-01 08:40:01.893443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.179 [2024-10-01 08:40:01.893458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.180 [2024-10-01 08:40:01.893470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.180 [2024-10-01 08:40:01.893480] nvme_qpair.c: 
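The dump above is mechanical: each outstanding command on the dead qpair produces one nvme_io_qpair_print_command record plus one identical ABORTED - SQ DELETION completion, so the only varying fields are the opcode, cid, lba, and len, with lba advancing by exactly len (128 blocks) per cid, i.e. the aborted workload was a sequential read. For sifting through consoles like this one, a minimal summarizer sketch in Python; the regex and the name summarize_aborts are illustrative helpers, not part of SPDK or the Jenkins job:

  import re
  from collections import defaultdict

  # Matches the command half of each record pair above; the completion half
  # is constant ("ABORTED - SQ DELETION (00/08) ...") so it is not parsed.
  CMD = re.compile(r"(READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

  def summarize_aborts(console_text):
      """Collapse a dump into one line per opcode: count, cid range, lba range."""
      groups = defaultdict(list)
      for op, _sqid, cid, _nsid, lba, length in CMD.findall(console_text):
          groups[op].append((int(cid), int(lba), int(length)))
      for op, cmds in sorted(groups.items()):
          cids = sorted(c for c, _, _ in cmds)
          lbas = sorted(l for _, l, _ in cmds)
          print(f"{op}: {len(cmds)} aborted, cid {cids[0]}..{cids[-1]}, "
                f"lba {lbas[0]}..{lbas[-1]}, len {cmds[0][2]}")

  # Two records copied from the dump above:
  summarize_aborts("READ sqid:1 cid:0 nsid:1 lba:24576 len:128 ... "
                   "READ sqid:1 cid:1 nsid:1 lba:24704 len:128 ...")
  # -> READ: 2 aborted, cid 0..1, lba 24576..24704, len 128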
00:25:10.179 [2024-10-01 08:40:01.893443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.179 [2024-10-01 08:40:01.893458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/ABORTED record pair repeats for cid:1 through cid:63, lba advancing by 128 from 16512 to 24448 ...]
00:25:10.181 [2024-10-01 08:40:01.894608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28a40 is same with the state(6) to be set
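Every completion in these dumps carries the same status pair, (00/08): status code type 0x0 (generic command status) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion, matching the printed "ABORTED - SQ DELETION" text. A tiny decoder sketch covering just the codes that appear in this log; the table and decode_status function are illustrative, not an SPDK API:

  import re

  # Map only the generic (SCT=0x0) status codes seen in this log; anything
  # else is reported as unmapped. Values follow the NVMe base spec.
  GENERIC_STATUS = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

  def decode_status(sct: int, sc: int) -> str:
      if sct == 0x0:  # SCT 0x0 = generic command status
          return GENERIC_STATUS.get(sc, f"generic sc 0x{sc:02x} (unmapped)")
      return f"sct 0x{sct:x} sc 0x{sc:02x} (unmapped)"

  # "(00/08)" as printed in the completions above -> (sct, sc)
  sct, sc = (int(x, 16) for x in re.match(r"\((\w+)/(\w+)\)", "(00/08)").groups())
  print(decode_status(sct, sc))  # -> ABORTED - SQ DELETION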
00:25:10.181 [2024-10-01 08:40:01.895876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.181 [2024-10-01 08:40:01.895890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same record pair repeats for the rest of this qpair's outstanding commands: WRITE cid:4-5 (lba 33280-33408), READ cid:10-33 (lba 25856-28800), WRITE cid:6-9 (lba 33536-33920), READ cid:34-63 (lba 28928-32640), and READ cid:0-2 (lba 32768-33024), lba advancing by 128 within each run ...]
00:25:10.183 [2024-10-01 08:40:01.897070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3ca20 is same with the state(6) to be set
00:25:10.183 [2024-10-01 08:40:01.898333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.183 [2024-10-01 08:40:01.898350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.183 [2024-10-01 08:40:01.898363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.183 [2024-10-01 08:40:01.898371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.183 [2024-10-01 08:40:01.898381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.183 [2024-10-01 08:40:01.898388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:10.183 [2024-10-01 08:40:01.898398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.183 [2024-10-01 08:40:01.898816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.183 [2024-10-01 08:40:01.898824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.898978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.898986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:10.184 [2024-10-01 08:40:01.899136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 
08:40:01.899314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.184 [2024-10-01 08:40:01.899488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.184 [2024-10-01 08:40:01.899497] 
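Every aborted completion above carries the status pair (00/08): status code type 0x0 (generic command status) and status code 0x08, which the NVMe spec defines as "Command Aborted due to SQ Deletion". That is the expected fallout here, since the submission queues are deleted out from under the outstanding verify I/O while the controllers reset. Below is a minimal standalone decoder for that pair, following the CQE Dword 3 layout (16-bit status word in bits 31:16, phase tag in its low bit); it is an illustrative sketch, not SPDK's own print routine:

    /*
     * decode_status.c - standalone sketch, not SPDK source.  Unpacks the
     * 16-bit NVMe completion status word (CQE Dword 3 bits 31:16, phase tag
     * in bit 0) into the "(SCT/SC)" pair that the log lines above print.
     */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t status = 0x0010;             /* example word: SCT=0x0, SC=0x08 */

        unsigned sc  = (status >> 1)  & 0xff; /* Status Code      (bits 8:1)   */
        unsigned sct = (status >> 9)  & 0x7;  /* Status Code Type (bits 11:9)  */
        unsigned m   = (status >> 14) & 0x1;  /* More                          */
        unsigned dnr = (status >> 15) & 0x1;  /* Do Not Retry                  */

        /* SCT 0x0 is the generic command status set; SC 0x08 in that set is
         * "Command Aborted due to SQ Deletion" per the NVMe spec.            */
        printf("(%02x/%02x) m:%u dnr:%u -> %s\n", sct, sc, m, dnr,
               (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other status");
        return 0;
    }

Compiled and run, it prints (00/08) m:0 dnr:0 -> ABORTED - SQ DELETION, matching the fields in the completions above.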
00:25:10.184 [2024-10-01 08:40:01.900781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:10.184 [2024-10-01 08:40:01.900802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:10.184 [2024-10-01 08:40:01.900899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:10.184 [2024-10-01 08:40:01.900914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:10.184 [2024-10-01 08:40:01.900925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:10.184 [2024-10-01 08:40:01.900934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:10.185 [2024-10-01 08:40:01.900943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:10.185 [2024-10-01 08:40:01.901211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.185 [2024-10-01 08:40:01.901228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1638990 with addr=10.0.0.2, port=4420
00:25:10.185 [2024-10-01 08:40:01.901236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638990 is same with the state(6) to be set
00:25:10.185 [2024-10-01 08:40:01.901612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.185 [2024-10-01 08:40:01.901624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a651b0 with addr=10.0.0.2, port=4420
00:25:10.185 [2024-10-01 08:40:01.901633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a651b0 is same with the state(6) to be set
00:25:10.185 [2024-10-01 08:40:01.902690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:10.185 [2024-10-01 08:40:01.902707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:10.185 [2024-10-01 08:40:01.902904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.185 [2024-10-01 08:40:01.902918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1553610 with addr=10.0.0.2, port=4420
00:25:10.185 [2024-10-01 08:40:01.902925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553610 is same with the state(6) to be set
00:25:10.185 [2024-10-01 08:40:01.903225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.185 [2024-10-01 08:40:01.903237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1630120 with addr=10.0.0.2, port=4420
00:25:10.185 [2024-10-01 08:40:01.903244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630120 is same with the state(6) to be set
00:25:10.185 [2024-10-01 08:40:01.903596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.185 [2024-10-01 08:40:01.903608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163af30 with addr=10.0.0.2, port=4420
00:25:10.185 [2024-10-01 08:40:01.903616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163af30 is same with the state(6) to be set
00:25:10.185 [2024-10-01 08:40:01.903911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.185 [2024-10-01 08:40:01.903923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a64e90 with addr=10.0.0.2, port=4420
00:25:10.185 [2024-10-01 08:40:01.903931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a64e90 is same with the state(6) to be set
00:25:10.185 [2024-10-01 08:40:01.904236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.185 [2024-10-01 08:40:01.904252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163aad0 with addr=10.0.0.2, port=4420
00:25:10.185 [2024-10-01 08:40:01.904260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163aad0 is same with the state(6) to be set
00:25:10.185 [2024-10-01 08:40:01.904270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638990 (9): Bad file descriptor
00:25:10.185 [2024-10-01 08:40:01.904280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a651b0 (9): Bad file descriptor
00:25:10.185 [2024-10-01 08:40:01.904345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.185 [2024-10-01 08:40:01.904356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 more READ + "ABORTED - SQ DELETION (00/08)" pairs elided (08:40:01.904368-905471): cid:1-63, lba 16512-24448 in steps of 128 ...]
00:25:10.186 [2024-10-01 08:40:01.905479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3f310 is same with the state(6) to be set
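The geometry of the aborted commands lines up with the job parameters printed in the table below: every command is len:128 LBAs, and 128 LBAs x 512 bytes per LBA = 65,536 bytes, exactly the "IO size: 65536" of each job; the command identifiers sweep cid:0-63, one per outstanding I/O, matching "depth: 64"; and consecutive commands step the LBA by exactly 128, i.e. back-to-back 64 KiB chunks of the verified range. (The 512-byte LBA size is an inference from those two printed numbers, not stated directly in the log.)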
00:25:10.187 task offset: 24576 on job bdev=Nvme2n1 fails
00:25:10.187
00:25:10.187                                                                                    Latency(us)
00:25:10.187 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:10.187 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme1n1 ended in about 0.96 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme1n1                   :       0.96     199.88      12.49      66.63       0.00  237423.15   19442.35  242920.11
00:25:10.187 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme2n1 ended in about 0.95 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme2n1                   :       0.95     201.98      12.62      67.33       0.00  230068.27   21189.97  248162.99
00:25:10.187 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme3n1 ended in about 0.97 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme3n1                   :       0.97     197.25      12.33      65.75       0.00  230938.67   11250.35  255153.49
00:25:10.187 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme4n1 ended in about 0.95 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme4n1                   :       0.95     201.71      12.61      67.24       0.00  220687.57   16602.45  246415.36
00:25:10.187 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme5n1 ended in about 0.98 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme5n1                   :       0.98     131.17       8.20      65.58       0.00  296002.28   20753.07  277872.64
00:25:10.187 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme6n1 ended in about 0.95 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme6n1                   :       0.95     201.44      12.59      67.15       0.00  211348.48   21408.43  227191.47
00:25:10.187 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme7n1 ended in about 0.98 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme7n1                   :       0.98     206.48      12.91      65.42       0.00  204778.03   12178.77  232434.35
00:25:10.187 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme8n1 ended in about 0.98 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme8n1                   :       0.98     130.52       8.16      65.26       0.00  278140.59   15837.87  256901.12
00:25:10.187 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme9n1 ended in about 0.99 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme9n1                   :       0.99     129.73       8.11      64.86       0.00  273631.00   25231.36  249910.61
00:25:10.187 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:10.187 Job: Nvme10n1 ended in about 0.97 seconds with error
00:25:10.187   Verification LBA range: start 0x0 length 0x400
00:25:10.187   Nvme10n1                  :       0.97     132.46       8.28      66.23       0.00  260491.66   16493.23  269134.51
00:25:10.187 ===================================================================================================================
00:25:10.187   Total                     :            1732.61     108.29     661.44       0.00  240560.62   11250.35  277872.64
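The Total row can be sanity-checked by re-aggregating the ten per-device rows. The sketch below copies the values straight from the table; the aggregation rule (plain sums for IOPS, MiB/s and Fail/s, min of the mins and max of the maxes for latency) is an assumption, and it reproduces the printed totals to within the 0.01 rounding already present in the per-device rows:

    /* check_totals.c - re-aggregate the bdevperf table above.
     * Columns per device: {IOPS, MiB/s, Fail/s, min latency, max latency}. */
    #include <stdio.h>

    int main(void)
    {
        double dev[10][5] = {
            {199.88, 12.49, 66.63, 19442.35, 242920.11},  /* Nvme1n1  */
            {201.98, 12.62, 67.33, 21189.97, 248162.99},  /* Nvme2n1  */
            {197.25, 12.33, 65.75, 11250.35, 255153.49},  /* Nvme3n1  */
            {201.71, 12.61, 67.24, 16602.45, 246415.36},  /* Nvme4n1  */
            {131.17,  8.20, 65.58, 20753.07, 277872.64},  /* Nvme5n1  */
            {201.44, 12.59, 67.15, 21408.43, 227191.47},  /* Nvme6n1  */
            {206.48, 12.91, 65.42, 12178.77, 232434.35},  /* Nvme7n1  */
            {130.52,  8.16, 65.26, 15837.87, 256901.12},  /* Nvme8n1  */
            {129.73,  8.11, 64.86, 25231.36, 249910.61},  /* Nvme9n1  */
            {132.46,  8.28, 66.23, 16493.23, 269134.51},  /* Nvme10n1 */
        };
        double iops = 0, mibs = 0, fails = 0, lo = dev[0][3], hi = dev[0][4];

        for (int i = 0; i < 10; i++) {
            iops  += dev[i][0];
            mibs  += dev[i][1];
            fails += dev[i][2];
            if (dev[i][3] < lo) lo = dev[i][3];
            if (dev[i][4] > hi) hi = dev[i][4];
        }
        /* Printed Total row: 1732.61 IOPS, 108.29 MiB/s, 661.44 Fail/s,
         * min 11250.35, max 277872.64.  The sums land within 0.01 of those
         * because the per-device rows are themselves rounded.              */
        printf("Total: %.2f IOPS, %.2f MiB/s, %.2f Fail/s, min %.2f us, max %.2f us\n",
               iops, mibs, fails, lo, hi);
        return 0;
    }

The min (Nvme3n1) and max (Nvme5n1) latencies match the Total row exactly, which supports the assumed aggregation rule.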
[2024-10-01 08:40:01.936123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab3e30 with addr=10.0.0.2, port=4420 00:25:10.187 [2024-10-01 08:40:01.936131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab3e30 is same with the state(6) to be set 00:25:10.187 [2024-10-01 08:40:01.936145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1553610 (9): Bad file descriptor 00:25:10.187 [2024-10-01 08:40:01.936157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1630120 (9): Bad file descriptor 00:25:10.187 [2024-10-01 08:40:01.936166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163af30 (9): Bad file descriptor 00:25:10.187 [2024-10-01 08:40:01.936176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a64e90 (9): Bad file descriptor 00:25:10.187 [2024-10-01 08:40:01.936186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163aad0 (9): Bad file descriptor 00:25:10.187 [2024-10-01 08:40:01.936195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.936202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.936210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:10.187 [2024-10-01 08:40:01.936226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.936234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.936241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:10.187 [2024-10-01 08:40:01.936351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.187 [2024-10-01 08:40:01.936363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.187 [2024-10-01 08:40:01.936641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.187 [2024-10-01 08:40:01.936653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae880 with addr=10.0.0.2, port=4420 00:25:10.187 [2024-10-01 08:40:01.936662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aae880 is same with the state(6) to be set 00:25:10.187 [2024-10-01 08:40:01.936672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5c380 (9): Bad file descriptor 00:25:10.187 [2024-10-01 08:40:01.936681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab3e30 (9): Bad file descriptor 00:25:10.187 [2024-10-01 08:40:01.936690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.936704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.936712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:25:10.187 [2024-10-01 08:40:01.936723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.936730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.936738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:10.187 [2024-10-01 08:40:01.936749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.936755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.936762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:10.187 [2024-10-01 08:40:01.936773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.936781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.936788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:10.187 [2024-10-01 08:40:01.936798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.936805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.936812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:10.187 [2024-10-01 08:40:01.936840] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:10.187 [2024-10-01 08:40:01.936853] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:10.187 [2024-10-01 08:40:01.936865] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:10.187 [2024-10-01 08:40:01.936875] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:10.187 [2024-10-01 08:40:01.936886] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:10.187 [2024-10-01 08:40:01.936906] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:10.187 [2024-10-01 08:40:01.936918] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:10.187 [2024-10-01 08:40:01.937226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.187 [2024-10-01 08:40:01.937238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.187 [2024-10-01 08:40:01.937244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.187 [2024-10-01 08:40:01.937251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.187 [2024-10-01 08:40:01.937257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.187 [2024-10-01 08:40:01.937272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aae880 (9): Bad file descriptor 00:25:10.187 [2024-10-01 08:40:01.937282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.937288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.937296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:10.187 [2024-10-01 08:40:01.937310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:10.187 [2024-10-01 08:40:01.937317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:10.187 [2024-10-01 08:40:01.937324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:10.187 [2024-10-01 08:40:01.937606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:10.187 [2024-10-01 08:40:01.937621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:10.188 [2024-10-01 08:40:01.937631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.188 [2024-10-01 08:40:01.937637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.188 [2024-10-01 08:40:01.937656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:10.188 [2024-10-01 08:40:01.937664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:10.188 [2024-10-01 08:40:01.937671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:10.188 [2024-10-01 08:40:01.937710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.188 [2024-10-01 08:40:01.938040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.188 [2024-10-01 08:40:01.938054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a651b0 with addr=10.0.0.2, port=4420 00:25:10.188 [2024-10-01 08:40:01.938062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a651b0 is same with the state(6) to be set 00:25:10.188 [2024-10-01 08:40:01.938263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.188 [2024-10-01 08:40:01.938275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1638990 with addr=10.0.0.2, port=4420 00:25:10.188 [2024-10-01 08:40:01.938282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638990 is same with the state(6) to be set 00:25:10.188 [2024-10-01 08:40:01.938312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a651b0 (9): Bad file descriptor 00:25:10.188 [2024-10-01 08:40:01.938323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638990 (9): Bad file descriptor 00:25:10.188 [2024-10-01 08:40:01.938350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:10.188 [2024-10-01 08:40:01.938358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:10.188 [2024-10-01 08:40:01.938366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:10.188 [2024-10-01 08:40:01.938375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:10.188 [2024-10-01 08:40:01.938382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:10.188 [2024-10-01 08:40:01.938389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:10.188 [2024-10-01 08:40:01.938417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.188 [2024-10-01 08:40:01.938424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
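A note for anyone triaging a dump like the one above: errno = 111 is ECONNREFUSED, so every posix_sock_create failure here is a reconnect attempt against a target that the shutdown test has already killed, and the controller-reinitialization errors that follow are the bdev layer giving up on those same controllers. One quick way to see which controllers were affected is to count the reinitialization failures per NQN in a saved copy of this console output; a minimal sketch, assuming the log was captured to a file named build.log (a placeholder, not something the harness writes):

    # Count 'controller reinitialization failed' events per subsystem NQN
    grep 'controller reinitialization failed' build.log \
      | grep -oE 'nqn\.[0-9]{4}-[0-9]{2}\.[^]]+' \
      | sort | uniq -c | sort -rn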
00:25:10.449 08:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:25:11.391 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3827011
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3827011
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3827011
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:11.392 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:11.392 rmmod nvme_tcp
rmmod nvme_fabrics
00:25:11.392 rmmod nvme_keyring
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 3826791 ']'
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 3826791
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3826791 ']'
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3826791
00:25:11.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3826791) - No such process
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3826791 is not found'
Process with pid 3826791 is not found
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:11.653 08:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:13.567
00:25:13.567 real 0m7.912s
00:25:13.567 user 0m19.515s
00:25:13.567 sys 0m1.253s
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:25:13.567 ************************************
00:25:13.567 END TEST nvmf_shutdown_tc3
00:25:13.567 ************************************
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:13.567 ************************************
00:25:13.567 START TEST nvmf_shutdown_tc4
00:25:13.567 ************************************
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:25:13.567 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:25:13.568 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}")
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}")
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 ))
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:25:13.829 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:25:13.829 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 ))
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
Found net devices under 0000:4b:00.0: cvl_0_0
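The gather_supported_nvmf_pci_devs walk above matches the host's two E810-class ports (Intel device ID 0x159b) out of pci_bus_cache and then resolves their kernel net devices through sysfs. The same lookup can be reproduced outside the harness with a rough sketch along these lines (assumes pciutils is installed; the device ID comes from the log, everything else is illustrative):

    # List net device names for every Intel 0x159b (E810) PCI function,
    # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/"$pci"/net/ 2>/dev/null)"
    done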
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:25:13.829 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]]
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
Found net devices under 0000:4b:00.1: cvl_0_1
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:13.830 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:14.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:14.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms
00:25:14.091
00:25:14.091 --- 10.0.0.2 ping statistics ---
00:25:14.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.091 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:14.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:14.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms
00:25:14.091
00:25:14.091 --- 10.0.0.1 ping statistics ---
00:25:14.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:14.091 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=3828373
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 3828373
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3828373 ']'
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
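The nvmf_tcp_init sequence above is what lets a single host act as both initiator and target: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings prove the path in both directions before nvmf_tgt is launched inside the namespace. Collected into one stand-alone sketch (interface and namespace names are copied from the log; run as root, error handling omitted):

    # Target side lives in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Initiator side stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns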
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:14.091 08:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.091 [2024-10-01 08:40:05.802205] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
[2024-10-01 08:40:05.802273] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-10-01 08:40:05.889872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-10-01 08:40:05.951237] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-10-01 08:40:05.951269] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-10-01 08:40:05.951275] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-10-01 08:40:05.951279] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-10-01 08:40:05.951284] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-10-01 08:40:05.952541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
[2024-10-01 08:40:05.952702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
[2024-10-01 08:40:05.952859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
[2024-10-01 08:40:05.952860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.924 [2024-10-01 08:40:06.645356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:14.924 08:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:14.924 Malloc1
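The @28/@29 loop above appends one block of RPC commands per subsystem into rpcs.txt, and the bare rpc_cmd at @36 then replays the whole file from stdin in a single process, which is why Malloc1 through Malloc10 appear in one burst around it. A hedged reconstruction of the kind of batch that loop builds; the malloc size, block size, and serial numbers here are illustrative guesses, not values read from shutdown.sh:

    # Build a ten-subsystem RPC batch, then replay it in one rpc.py process
    for i in $(seq 1 10); do
      {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
    done
    ./scripts/rpc.py < rpcs.txt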
00:25:14.924 [2024-10-01 08:40:06.743930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
Malloc2
00:25:15.185 Malloc3
00:25:15.185 Malloc4
00:25:15.185 Malloc5
00:25:15.185 Malloc6
00:25:15.185 Malloc7
00:25:15.185 Malloc8
00:25:15.446 Malloc9
00:25:15.446 Malloc10
00:25:15.446 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:15.446 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:25:15.446 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:15.446 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:15.446 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3828761
00:25:15.446 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:25:15.446 08:40:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:25:15.446 [2024-10-01 08:40:07.198479] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3828373
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3828373 ']'
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3828373
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3828373
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3828373'
killing process with pid 3828373
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3828373
00:25:20.448 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3828373
00:25:20.736 Write completed with error (sct=0,
sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 starting I/O failed: -6 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 starting I/O failed: -6 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 starting I/O failed: -6 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 starting I/O failed: -6 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 starting I/O failed: -6 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 starting I/O failed: -6 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 starting I/O failed: -6 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 [2024-10-01 08:40:12.224132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 [2024-10-01 08:40:12.224170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set 00:25:20.736 [2024-10-01 08:40:12.224177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with Write completed with error (sct=0, sc=8) 00:25:20.736 the state(6) to be set 00:25:20.736 [2024-10-01 08:40:12.224183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set 00:25:20.736 [2024-10-01 08:40:12.224188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set 00:25:20.736 [2024-10-01 08:40:12.224193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 [2024-10-01 08:40:12.224198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set 00:25:20.736 starting I/O failed: -6 00:25:20.736 [2024-10-01 08:40:12.224203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set 00:25:20.736 [2024-10-01 08:40:12.224208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set 00:25:20.736 Write completed with error (sct=0, sc=8) 00:25:20.736 [2024-10-01 08:40:12.224213] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726560 is same with the state(6) to be set
00:25:20.736 Write completed with error (sct=0, sc=8)
00:25:20.736 starting I/O failed: -6
00:25:20.736 (the two messages above repeat for every remaining queued write on the failing qpairs; duplicates omitted)
00:25:20.736 [2024-10-01 08:40:12.224429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.736 [2024-10-01 08:40:12.225512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.737 [2024-10-01 08:40:12.226296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27256f0 is same with the state(6) to be set (message repeated 4 times, last at 08:40:12.226335)
00:25:20.737 [2024-10-01 08:40:12.226454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.737 [2024-10-01 08:40:12.226612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2725bc0 is same with the state(6) to be set (message repeated 5 times, last at 08:40:12.226656)
00:25:20.737 [2024-10-01 08:40:12.226827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2726090 is same with the state(6) to be set (message repeated 4 times, last at 08:40:12.226876)
00:25:20.737 [2024-10-01 08:40:12.227080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2725220 is same with the state(6) to be set (message repeated 5 times, last at 08:40:12.227114)
00:25:20.737 [2024-10-01 08:40:12.228089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.737 NVMe io qpair process completion error
00:25:20.737 [2024-10-01 08:40:12.230740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2627920 is same with the state(6) to be set (message repeated 8 times, last at 08:40:12.230792)
00:25:20.738 [2024-10-01 08:40:12.231158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2627df0 is same with the state(6) to be set (message repeated 4 times, last at 08:40:12.231189)
00:25:20.738 [2024-10-01 08:40:12.231227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f6a50 is same with the state(6) to be set (message repeated 8 times, last at 08:40:12.231290)
00:25:20.738 [2024-10-01 08:40:12.231476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2611c70 is same with the state(6) to be set (message repeated 2 times, last at 08:40:12.231494)
00:25:20.738 [2024-10-01 08:40:12.232083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.738 [2024-10-01 08:40:12.232896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.739 [2024-10-01 08:40:12.233810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.739 [2024-10-01 08:40:12.235024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.739 NVMe io qpair process completion error
00:25:20.739 [2024-10-01 08:40:12.236270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.740 [2024-10-01 08:40:12.237098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.740 [2024-10-01 08:40:12.238044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.741 [2024-10-01 08:40:12.239736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.741 NVMe io qpair process completion error
00:25:20.741 [2024-10-01 08:40:12.240981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.741 [2024-10-01 08:40:12.241956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.742 [2024-10-01 08:40:12.242876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.742 [2024-10-01 08:40:12.245287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.742 NVMe io qpair process completion error
00:25:20.742 [2024-10-01 08:40:12.246398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.743 [2024-10-01 08:40:12.247290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.743 Write
completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 [2024-10-01 08:40:12.248201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write 
completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.743 Write completed with error (sct=0, sc=8) 00:25:20.743 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write 
completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 [2024-10-01 08:40:12.249825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:20.744 NVMe io qpair process completion error 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 [2024-10-01 08:40:12.250913] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 [2024-10-01 08:40:12.251722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error 
(sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 Write completed with error (sct=0, sc=8) 00:25:20.744 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: 
-6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 [2024-10-01 08:40:12.252652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 
Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 [2024-10-01 08:40:12.255259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:20.745 NVMe io qpair process completion error 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 
00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 starting I/O failed: -6 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.745 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 [2024-10-01 08:40:12.256395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, 
sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 [2024-10-01 08:40:12.257211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.746 starting I/O failed: -6 00:25:20.746 starting I/O failed: -6 00:25:20.746 starting I/O failed: -6 00:25:20.746 starting I/O failed: -6 00:25:20.746 starting I/O failed: -6 00:25:20.746 starting I/O failed: -6 00:25:20.746 starting I/O failed: -6 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 
Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 [2024-10-01 08:40:12.258399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, 
sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.746 Write completed with error (sct=0, sc=8) 00:25:20.746 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 [2024-10-01 08:40:12.259863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:20.747 NVMe io qpair process completion error 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, 
sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 [2024-10-01 08:40:12.261021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed 
with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 [2024-10-01 08:40:12.261834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with 
error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 starting I/O failed: -6 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.747 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 [2024-10-01 08:40:12.262761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with error (sct=0, sc=8) 00:25:20.748 starting I/O failed: -6 00:25:20.748 Write completed with 
error (sct=0, sc=8)
00:25:20.748 starting I/O failed: -6
00:25:20.748 Write completed with error (sct=0, sc=8)
00:25:20.748 [the "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pair repeats for every write still queued on the failing qpairs; several hundred identical repetitions trimmed]
00:25:20.748 [2024-10-01 08:40:12.264423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.748 NVMe io qpair process completion error
00:25:20.748 [2024-10-01 08:40:12.265606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:20.748 [2024-10-01 08:40:12.266562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.749 [2024-10-01 08:40:12.267494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:20.750 [2024-10-01 08:40:12.270603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.750 NVMe io qpair process completion error
00:25:20.751 [2024-10-01 08:40:12.273105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:20.751 [2024-10-01 08:40:12.274726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:20.751 NVMe io qpair process completion error
00:25:20.751 Initializing NVMe Controllers
00:25:20.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:20.751 Controller IO queue size 128, less than required.
00:25:20.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:20.751 [the same attach notice and queue-size warning are then logged in turn for cnode6, cnode10, cnode5, cnode2, cnode8, cnode4, cnode9, cnode3 and cnode1]
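The queue-size hint above comes from spdk_nvme_perf: the requested I/O depth exceeded the 128-entry I/O queues the target advertises, so surplus requests sit queued inside the NVMe driver. A minimal sketch of a perf invocation that respects the advertised queue size against one of the subsystems from this log (the -q/-o/-w/-t values are illustrative assumptions, not the arguments this test actually passed):

    #!/usr/bin/env bash
    # Hedged sketch: keep the I/O depth at or below the controller's reported
    # IO queue size (128 here) so nothing is re-queued at the NVMe driver.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    # -q I/O depth, -o I/O size in bytes, -w I/O pattern, -t run time in seconds,
    # -r transport ID of the target subsystem (traddr/subnqn taken from the log).
    "$PERF" -q 128 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'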
00:25:20.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:20.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:20.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:20.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:20.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:20.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:20.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:20.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:20.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:20.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:20.752 Initialization complete. Launching workers.
00:25:20.752 ========================================================
00:25:20.752 Latency(us)
00:25:20.752 Device Information : IOPS MiB/s Average min max
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1885.19 81.00 67907.57 770.67 118787.24
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1870.00 80.35 67849.56 763.70 120308.26
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1871.69 80.42 67806.17 620.88 146476.43
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1881.39 80.84 67480.94 692.27 117935.00
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1852.91 79.62 68550.97 942.15 147813.94
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1922.75 82.62 66080.83 777.52 121688.10
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1915.79 82.32 66357.33 615.57 118587.32
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1890.68 81.24 67257.98 649.28 126327.82
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1913.68 82.23 66474.16 696.76 128175.39
00:25:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1913.47 82.22 65858.87 579.63 118849.53
00:25:20.752 ========================================================
00:25:20.752 Total : 18917.53 812.86 67152.59 579.63 147813.94
00:25:20.752
00:25:20.752 [2024-10-01 08:40:12.277400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7e60 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd1fc0 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd39d0 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd1c90 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd3bb0 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd8190 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd1960 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd84c0 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd1630 is same with the state(6) to be set
00:25:20.752 [2024-10-01 08:40:12.277678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd37f0 is same with the state(6) to be set
00:25:20.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:25:20.752 08:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3828761
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3828761
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3828761
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup
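The NOT/valid_exec_arg trace above is the harness asserting that the reaped perf process really did fail. A hedged bash sketch of what that traced logic amounts to (reconstructed from the xtrace, not copied from autotest_common.sh):

    # NOT succeeds exactly when the wrapped command fails with a plain error.
    NOT() {
        local es=0
        # refuse to wrap things that are not callable (mirrors valid_exec_arg)
        case "$(type -t "$1")" in
            function|builtin|file|alias|keyword) ;;
            *) return 1 ;;
        esac
        "$@" || es=$?
        # an exit status above 128 usually means death by signal - propagate it
        (( es > 128 )) && return "$es"
        # arithmetic truth: a nonzero es makes NOT return 0 (success)
        (( es != 0 ))
    }

    # usage, as in the trace: shutdown_tc4 expects the perf pid to exit non-zero
    NOT wait 3828761 && echo "perf exited non-zero, as the test requires"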
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:21.693 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:21.693 rmmod nvme_tcp
00:25:21.954 rmmod nvme_fabrics
00:25:21.954 rmmod nvme_keyring
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 3828373 ']'
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 3828373
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3828373 ']'
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3828373
00:25:21.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3828373) - No such process
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3828373 is not found'
00:25:21.954 Process with pid 3828373 is not found
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:21.954 08:40:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
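The iptr trace above is the firewall-cleanup step: save the current iptables rules, drop the SPDK_NVMF-tagged entries, and restore everything else. A hedged one-line equivalent of the traced pipeline (the real helper lives in nvmf/common.sh and wraps this with extra checks):

    # Strip SPDK_NVMF-marked firewall rules, keep the rest; needs root.
    iptables-save | grep -v SPDK_NVMF | iptables-restore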
00:25:23.865 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:23.865
00:25:23.865 real 0m10.275s
00:25:23.865 user 0m27.855s
00:25:23.865 sys 0m4.035s
00:25:23.865 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:23.865 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:23.865 ************************************
00:25:23.865 END TEST nvmf_shutdown_tc4
00:25:23.865 ************************************
00:25:24.126 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:25:24.126
00:25:24.126 real 0m43.340s
00:25:24.126 user 1m45.836s
00:25:24.126 sys 0m13.672s
00:25:24.126 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:24.126 08:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:24.126 ************************************
00:25:24.126 END TEST nvmf_shutdown
00:25:24.126 ************************************
00:25:24.126 08:40:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:25:24.126
00:25:24.126 real 12m44.547s
00:25:24.126 user 26m59.759s
00:25:24.126 sys 3m43.835s
00:25:24.126 08:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:24.126 08:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:24.126 ************************************
00:25:24.126 END TEST nvmf_target_extra
00:25:24.126 ************************************
00:25:24.126 08:40:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:24.126 08:40:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:24.126 08:40:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:24.126 08:40:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:24.126 ************************************
00:25:24.126 START TEST nvmf_host
00:25:24.126 ************************************
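run_test is the wrapper that produces the START TEST / END TEST banners and the per-test real/user/sys timing seen above. A hedged sketch of that pattern (reconstructed from the trace; the real autotest_common.sh helper adds argument checks and xtrace handling):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # timing output matches the real/user/sys lines in the log
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

    run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp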
00:25:24.126 08:40:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:24.126 * Looking for test storage...
00:25:24.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:25:24.126 08:40:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:25:24.126 08:40:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version
00:25:24.126 08:40:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:24.388 08:40:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
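The cmp_versions trace above is scripts/common.sh deciding that lcov 1.15 is older than 2: both versions are split on ., - and :, then compared field by field. A hedged sketch of that comparison logic (reconstructed from the xtrace, not copied from scripts/common.sh):

    # lt returns 0 when version $1 is older than version $2.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # missing fields count as 0, so "2" compares like "2.0"
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]]   # every field matched
    }

    lt 1.15 2 && echo "lcov is older than 2"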
00:25:24.388 08:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:25:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:24.388 --rc genhtml_branch_coverage=1
00:25:24.388 --rc genhtml_function_coverage=1
00:25:24.388 --rc genhtml_legend=1
00:25:24.388 --rc geninfo_all_blocks=1
00:25:24.388 --rc geninfo_unexecuted_blocks=1
00:25:24.388
00:25:24.388 '
00:25:24.389 [the same multi-line option block is echoed three more times, for the LCOV_OPTS= assignment and for export 'LCOV=lcov ...' / LCOV='lcov ...'; verbatim duplicates trimmed]
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:24.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:24.389 08:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.389 ************************************ 00:25:24.389 START TEST nvmf_multicontroller 00:25:24.389 ************************************ 00:25:24.389 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:24.389 * Looking for test storage... 00:25:24.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.389 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:24.389 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:25:24.389 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:24.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.651 --rc genhtml_branch_coverage=1 00:25:24.651 --rc genhtml_function_coverage=1 00:25:24.651 --rc genhtml_legend=1 00:25:24.651 --rc geninfo_all_blocks=1 00:25:24.651 --rc geninfo_unexecuted_blocks=1 00:25:24.651 00:25:24.651 ' 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:24.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.651 --rc genhtml_branch_coverage=1 00:25:24.651 --rc genhtml_function_coverage=1 00:25:24.651 --rc genhtml_legend=1 00:25:24.651 --rc geninfo_all_blocks=1 00:25:24.651 --rc geninfo_unexecuted_blocks=1 00:25:24.651 00:25:24.651 ' 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:24.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.651 --rc genhtml_branch_coverage=1 00:25:24.651 --rc genhtml_function_coverage=1 00:25:24.651 --rc genhtml_legend=1 00:25:24.651 --rc geninfo_all_blocks=1 00:25:24.651 --rc geninfo_unexecuted_blocks=1 00:25:24.651 00:25:24.651 ' 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:24.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.651 --rc genhtml_branch_coverage=1 00:25:24.651 --rc genhtml_function_coverage=1 00:25:24.651 --rc genhtml_legend=1 00:25:24.651 --rc geninfo_all_blocks=1 00:25:24.651 --rc geninfo_unexecuted_blocks=1 00:25:24.651 00:25:24.651 ' 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:24.651 08:40:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.651 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:24.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:24.652 08:40:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:24.652 08:40:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:31.241 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.241 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.241 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.242 
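The empty arrays declared above (pci_devs, net_devs, e810, x722, mlx) drive NIC discovery for this test: the lines that follow fill them from pci_bus_cache, which, as far as can be reconstructed from this trace, maps "vendor:device" ID pairs to PCI addresses, and the [[ e810 == e810 ]] branch below then narrows pci_devs to the e810 set before resolving each port to its kernel netdev. A condensed sketch of that pattern, with names taken from the trace and the cache semantics inferred rather than quoted from nvmf/common.sh:

    # Sketch of gather_supported_nvmf_pci_devs (hedged reconstruction; pci_bus_cache
    # is assumed to map "vendor:device" -> PCI addresses, built earlier by the harness).
    intel=0x8086
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810 device IDs of interest
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # the ID both ports report below
    pci_devs=("${e810[@]}")                     # e810 is the NIC type under test here
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to that port
        net_devs+=("${pci_net_devs[@]##*/}")               # basename -> cvl_0_0, cvl_0_1
    done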
08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:31.242 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:31.242 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:31.242 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:31.242 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.242 08:40:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.242 08:40:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:25:31.242 00:25:31.242 --- 10.0.0.2 ping statistics --- 00:25:31.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.242 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:25:31.242 00:25:31.242 --- 10.0.0.1 ping statistics --- 00:25:31.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.242 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.242 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:31.243 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=3834159 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 3834159 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3834159 ']' 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:31.505 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:31.505 [2024-10-01 08:40:23.142840] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
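At this point nvmftestinit has wired the two discovered e810 ports into a target/initiator pair: cvl_0_0 was moved into the network namespace cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1), and the two one-packet pings above confirm reachability across the physical link before any NVMe traffic flows. A minimal standalone sketch of the same setup, with every command taken from the trace above (it assumes the back-to-back cabling between the two ports that this phy rig provides):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # drop stale addresses
    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first port -> target ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, visible just above), so every NVMe/TCP connection in the rest of this test traverses the real NICs rather than loopback.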
00:25:31.505 [2024-10-01 08:40:23.142908] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.505 [2024-10-01 08:40:23.230705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:31.505 [2024-10-01 08:40:23.322699] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.505 [2024-10-01 08:40:23.322757] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.505 [2024-10-01 08:40:23.322766] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.505 [2024-10-01 08:40:23.322773] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.505 [2024-10-01 08:40:23.322779] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.505 [2024-10-01 08:40:23.324090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.505 [2024-10-01 08:40:23.324444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.505 [2024-10-01 08:40:23.324445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.447 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:32.447 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:32.447 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:32.447 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:32.447 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.448 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.448 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 [2024-10-01 08:40:23.999494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 Malloc0 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 [2024-10-01 08:40:24.070433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 [2024-10-01 08:40:24.082392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 Malloc1 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3834397 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3834397 /var/tmp/bdevperf.sock 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3834397 ']' 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
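The rpc_cmd calls above assemble the device under test: one TCP transport (with the -o -u 8192 options the harness passes), two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems (cnode1, cnode2) each listening on 10.0.0.2 ports 4420 and 4421, giving the multicontroller test several distinct paths to probe. rpc_cmd is the autotest wrapper around scripts/rpc.py, so a standalone equivalent would look roughly like the following sketch (the relative script paths and the target's default RPC socket are assumptions; every flag is copied from the trace):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...the same bdev/subsystem/listener steps repeat for Malloc1 and cnode2...
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

Because of -z, bdevperf only attaches controllers and runs I/O when driven over /var/tmp/bdevperf.sock; the perform_tests result further down (about 21.5k IOPS of 4 KiB writes at queue depth 128) is consistent with Little's law, since 128 / 21564 IOPS ≈ 5.9 ms matches the reported avg_latency_us of ~5925.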
00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:32.448 08:40:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.391 NVMe0n1 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.391 1 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.391 request: 00:25:33.391 { 00:25:33.391 "name": "NVMe0", 00:25:33.391 "trtype": "tcp", 00:25:33.391 "traddr": "10.0.0.2", 00:25:33.391 "adrfam": "ipv4", 00:25:33.391 "trsvcid": "4420", 00:25:33.391 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:25:33.391 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:33.391 "hostaddr": "10.0.0.1", 00:25:33.391 "prchk_reftag": false, 00:25:33.391 "prchk_guard": false, 00:25:33.391 "hdgst": false, 00:25:33.391 "ddgst": false, 00:25:33.391 "allow_unrecognized_csi": false, 00:25:33.391 "method": "bdev_nvme_attach_controller", 00:25:33.391 "req_id": 1 00:25:33.391 } 00:25:33.391 Got JSON-RPC error response 00:25:33.391 response: 00:25:33.391 { 00:25:33.391 "code": -114, 00:25:33.391 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:33.391 } 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.391 request: 00:25:33.391 { 00:25:33.391 "name": "NVMe0", 00:25:33.391 "trtype": "tcp", 00:25:33.391 "traddr": "10.0.0.2", 00:25:33.391 "adrfam": "ipv4", 00:25:33.391 "trsvcid": "4420", 00:25:33.391 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:33.391 "hostaddr": "10.0.0.1", 00:25:33.391 "prchk_reftag": false, 00:25:33.391 "prchk_guard": false, 00:25:33.391 "hdgst": false, 00:25:33.391 "ddgst": false, 00:25:33.391 "allow_unrecognized_csi": false, 00:25:33.391 "method": "bdev_nvme_attach_controller", 00:25:33.391 "req_id": 1 00:25:33.391 } 00:25:33.391 Got JSON-RPC error response 00:25:33.391 response: 00:25:33.391 { 00:25:33.391 "code": -114, 00:25:33.391 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:33.391 } 00:25:33.391 08:40:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:33.391 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.392 request: 00:25:33.392 { 00:25:33.392 "name": "NVMe0", 00:25:33.392 "trtype": "tcp", 00:25:33.392 "traddr": "10.0.0.2", 00:25:33.392 "adrfam": "ipv4", 00:25:33.392 "trsvcid": "4420", 00:25:33.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.392 "hostaddr": "10.0.0.1", 00:25:33.392 "prchk_reftag": false, 00:25:33.392 "prchk_guard": false, 00:25:33.392 "hdgst": false, 00:25:33.392 "ddgst": false, 00:25:33.392 "multipath": "disable", 00:25:33.392 "allow_unrecognized_csi": false, 00:25:33.392 "method": "bdev_nvme_attach_controller", 00:25:33.392 "req_id": 1 00:25:33.392 } 00:25:33.392 Got JSON-RPC error response 00:25:33.392 response: 00:25:33.392 { 00:25:33.392 "code": -114, 00:25:33.392 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:33.392 } 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.392 08:40:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.392 request: 00:25:33.392 { 00:25:33.392 "name": "NVMe0", 00:25:33.392 "trtype": "tcp", 00:25:33.392 "traddr": "10.0.0.2", 00:25:33.392 "adrfam": "ipv4", 00:25:33.392 "trsvcid": "4420", 00:25:33.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.392 "hostaddr": "10.0.0.1", 00:25:33.392 "prchk_reftag": false, 00:25:33.392 "prchk_guard": false, 00:25:33.392 "hdgst": false, 00:25:33.392 "ddgst": false, 00:25:33.392 "multipath": "failover", 00:25:33.392 "allow_unrecognized_csi": false, 00:25:33.392 "method": "bdev_nvme_attach_controller", 00:25:33.392 "req_id": 1 00:25:33.392 } 00:25:33.392 Got JSON-RPC error response 00:25:33.392 response: 00:25:33.392 { 00:25:33.392 "code": -114, 00:25:33.392 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:33.392 } 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.392 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:33.654 00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
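The rejected attach attempts above (multicontroller.sh@60, @65, @69 and @74) all probe the same rule: the controller name given with -b is bound to the identity of its first attach, so reusing NVMe0 with a different hostnqn, pointing it at cnode2, or re-adding the already-attached 4420 path with -x disable or -x failover is answered with JSON-RPC error -114 rather than silently creating a second controller (that reading is inferred from the error strings in this log, not from SPDK documentation). The @79 call just above shows the accepted variant, a new 4421 path for the same subsystem NQN, and the detach/re-attach/count steps that follow complete the picture; condensed, with flags copied from the trace ($RPC is shorthand introduced here):

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"            # bdevperf's RPC socket
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1                # second path: accepted
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1                # drop that path again
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1    # fresh name: accepted
    $RPC bdev_nvme_get_controllers | grep -c NVMe             # harness expects 2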
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:33.654
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:25:33.654 08:40:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:35.037 {
00:25:35.037   "results": [
00:25:35.037     {
00:25:35.037       "job": "NVMe0n1",
00:25:35.037       "core_mask": "0x1",
00:25:35.037       "workload": "write",
00:25:35.037       "status": "finished",
00:25:35.037       "queue_depth": 128,
00:25:35.037       "io_size": 4096,
00:25:35.037       "runtime": 1.003662,
00:25:35.037       "iops": 21564.03251293762,
00:25:35.037       "mibps": 84.23450200366258,
00:25:35.037       "io_failed": 0,
00:25:35.037       "io_timeout": 0,
00:25:35.037       "avg_latency_us": 5925.48894453942,
00:25:35.037       "min_latency_us": 2075.306666666667,
00:25:35.037       "max_latency_us": 10922.666666666666
00:25:35.037     }
00:25:35.037   ],
00:25:35.037   "core_count": 1
00:25:35.037 }
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3834397
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller --
common/autotest_common.sh@950 -- # '[' -z 3834397 ']' 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3834397 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3834397 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3834397' 00:25:35.037 killing process with pid 3834397 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3834397 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3834397 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:25:35.037 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:35.037 [2024-10-01 08:40:24.213085] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:25:35.037 [2024-10-01 08:40:24.213139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834397 ]
00:25:35.037 [2024-10-01 08:40:24.273005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:35.037 [2024-10-01 08:40:24.337566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:25:35.037 [2024-10-01 08:40:25.380320] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name d2e01a40-a584-44e4-abc4-d6ccf949482a already exists
00:25:35.037 [2024-10-01 08:40:25.380349] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:d2e01a40-a584-44e4-abc4-d6ccf949482a alias for bdev NVMe1n1
00:25:35.037 [2024-10-01 08:40:25.380357] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:25:35.037 Running I/O for 1 seconds...
00:25:35.037 21515.00 IOPS, 84.04 MiB/s
00:25:35.037 Latency(us)
00:25:35.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:35.037 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:25:35.037 NVMe0n1 : 1.00 21564.03 84.23 0.00 0.00 5925.49 2075.31 10922.67
00:25:35.037 ===================================================================================================================
00:25:35.037 Total : 21564.03 84.23 0.00 0.00 5925.49 2075.31 10922.67
00:25:35.037 Received shutdown signal, test time was about 1.000000 seconds
00:25:35.037
00:25:35.037 Latency(us)
00:25:35.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:35.037 ===================================================================================================================
00:25:35.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:35.037 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:35.037 rmmod nvme_tcp
00:25:35.037 rmmod nvme_fabrics
00:25:35.037 rmmod nvme_keyring
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 3834159 ']'
00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller --
nvmf/common.sh@514 -- # killprocess 3834159 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3834159 ']' 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3834159 00:25:35.037 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:25:35.298 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.298 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3834159 00:25:35.298 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:35.298 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:35.298 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3834159' 00:25:35.298 killing process with pid 3834159 00:25:35.298 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3834159 00:25:35.298 08:40:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3834159 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.298 08:40:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:37.844 00:25:37.844 real 0m13.053s 00:25:37.844 user 0m15.978s 00:25:37.844 sys 0m5.998s 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:37.844 ************************************ 00:25:37.844 END TEST nvmf_multicontroller 00:25:37.844 ************************************ 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.844 ************************************ 00:25:37.844 START TEST nvmf_aer 00:25:37.844 ************************************ 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:37.844 * Looking for test storage... 00:25:37.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.844 --rc genhtml_branch_coverage=1 00:25:37.844 --rc genhtml_function_coverage=1 00:25:37.844 --rc genhtml_legend=1 00:25:37.844 --rc geninfo_all_blocks=1 00:25:37.844 --rc geninfo_unexecuted_blocks=1 00:25:37.844 00:25:37.844 ' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.844 --rc genhtml_branch_coverage=1 00:25:37.844 --rc genhtml_function_coverage=1 00:25:37.844 --rc genhtml_legend=1 00:25:37.844 --rc geninfo_all_blocks=1 00:25:37.844 --rc geninfo_unexecuted_blocks=1 00:25:37.844 00:25:37.844 ' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.844 --rc genhtml_branch_coverage=1 00:25:37.844 --rc genhtml_function_coverage=1 00:25:37.844 --rc genhtml_legend=1 00:25:37.844 --rc geninfo_all_blocks=1 00:25:37.844 --rc geninfo_unexecuted_blocks=1 00:25:37.844 00:25:37.844 ' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.844 --rc genhtml_branch_coverage=1 00:25:37.844 --rc genhtml_function_coverage=1 00:25:37.844 --rc genhtml_legend=1 00:25:37.844 --rc geninfo_all_blocks=1 00:25:37.844 --rc geninfo_unexecuted_blocks=1 00:25:37.844 00:25:37.844 ' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.844 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:37.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:37.845 08:40:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.041 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:46.042 08:40:36 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:46.042 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:46.042 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:46.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:46.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.042 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:46.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:46.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms
00:25:46.043
00:25:46.043 --- 10.0.0.2 ping statistics ---
00:25:46.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:46.043 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:46.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:46.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms
00:25:46.043
00:25:46.043 --- 10.0.0.1 ping statistics ---
00:25:46.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:46.043 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=3839111
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 3839111
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3839111 ']'
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:46.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.043 08:40:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:46.043 [2024-10-01 08:40:36.741918] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:25:46.043 [2024-10-01 08:40:36.741992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.043 [2024-10-01 08:40:36.815258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.043 [2024-10-01 08:40:36.889174] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.043 [2024-10-01 08:40:36.889213] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.043 [2024-10-01 08:40:36.889222] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.043 [2024-10-01 08:40:36.889229] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.043 [2024-10-01 08:40:36.889235] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.043 [2024-10-01 08:40:36.890795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.043 [2024-10-01 08:40:36.890916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.043 [2024-10-01 08:40:36.891072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.043 [2024-10-01 08:40:36.891072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:46.043 [2024-10-01 08:40:37.587341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:46.043 Malloc0 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]]
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.043 [2024-10-01 08:40:37.646632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.043 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.043 [
00:25:46.043 {
00:25:46.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:46.043 "subtype": "Discovery",
00:25:46.043 "listen_addresses": [],
00:25:46.043 "allow_any_host": true,
00:25:46.043 "hosts": []
00:25:46.043 },
00:25:46.043 {
00:25:46.043 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:46.043 "subtype": "NVMe",
00:25:46.043 "listen_addresses": [
00:25:46.044 {
00:25:46.044 "trtype": "TCP",
00:25:46.044 "adrfam": "IPv4",
00:25:46.044 "traddr": "10.0.0.2",
00:25:46.044 "trsvcid": "4420"
00:25:46.044 }
00:25:46.044 ],
00:25:46.044 "allow_any_host": true,
00:25:46.044 "hosts": [],
00:25:46.044 "serial_number": "SPDK00000000000001",
00:25:46.044 "model_number": "SPDK bdev Controller",
00:25:46.044 "max_namespaces": 2,
00:25:46.044 "min_cntlid": 1,
00:25:46.044 "max_cntlid": 65519,
00:25:46.044 "namespaces": [
00:25:46.044 {
00:25:46.044 "nsid": 1,
00:25:46.044 "bdev_name": "Malloc0",
00:25:46.044 "name": "Malloc0",
00:25:46.044 "nguid": "99D4AB30BE4B4CFEA6D865AD6F68C7F0",
00:25:46.044 "uuid": "99d4ab30-be4b-4cfe-a6d8-65ad6f68c7f0"
00:25:46.044 }
00:25:46.044 ]
00:25:46.044 }
00:25:46.044 ]
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3839226
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!'
-e /tmp/aer_touch_file ']'
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']'
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']'
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2
00:25:46.044 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.341 Malloc1
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.341 Asynchronous Event Request test
00:25:46.341 Attaching to 10.0.0.2
00:25:46.341 Attached to 10.0.0.2
00:25:46.341 Registering asynchronous event callbacks...
00:25:46.341 Starting namespace attribute notice tests for all controllers...
00:25:46.341 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:25:46.341 aer_cb - Changed Namespace
00:25:46.341 Cleaning up...
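The block above is the event this test exists to observe: the aer helper (test/nvme/aer/aer) connects to nqn.2016-06.io.spdk:cnode1, arms its asynchronous-event callback, and touches /tmp/aer_touch_file; once the waitforfile loop sees that, the script hot-adds Malloc1 as nsid 2, which makes the target emit a namespace-attribute-changed AER ("aer_cb - Changed Namespace"). The nvmf_get_subsystems dump that follows confirms the subsystem now carries both namespaces. A rough sketch of the same trigger sequence against a running target, assembled only from commands visible in this log ($SPDK_DIR standing in for the checkout path):

  # Arm the AER listener; it touches the file once its callback is registered.
  $SPDK_DIR/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  # Wait for the listener, then hot-add a second namespace to fire the AER.
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2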
00:25:46.341 [
00:25:46.341 {
00:25:46.341 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:46.341 "subtype": "Discovery",
00:25:46.341 "listen_addresses": [],
00:25:46.341 "allow_any_host": true,
00:25:46.341 "hosts": []
00:25:46.341 },
00:25:46.341 {
00:25:46.341 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:46.341 "subtype": "NVMe",
00:25:46.341 "listen_addresses": [
00:25:46.341 {
00:25:46.341 "trtype": "TCP",
00:25:46.341 "adrfam": "IPv4",
00:25:46.341 "traddr": "10.0.0.2",
00:25:46.341 "trsvcid": "4420"
00:25:46.341 }
00:25:46.341 ],
00:25:46.341 "allow_any_host": true,
00:25:46.341 "hosts": [],
00:25:46.341 "serial_number": "SPDK00000000000001",
00:25:46.341 "model_number": "SPDK bdev Controller",
00:25:46.341 "max_namespaces": 2,
00:25:46.341 "min_cntlid": 1,
00:25:46.341 "max_cntlid": 65519,
00:25:46.341 "namespaces": [
00:25:46.341 {
00:25:46.341 "nsid": 1,
00:25:46.341 "bdev_name": "Malloc0",
00:25:46.341 "name": "Malloc0",
00:25:46.341 "nguid": "99D4AB30BE4B4CFEA6D865AD6F68C7F0",
00:25:46.341 "uuid": "99d4ab30-be4b-4cfe-a6d8-65ad6f68c7f0"
00:25:46.341 },
00:25:46.341 {
00:25:46.341 "nsid": 2,
00:25:46.341 "bdev_name": "Malloc1",
00:25:46.341 "name": "Malloc1",
00:25:46.341 "nguid": "DF325BE0D4E34749AE1D160B5C79C9A9",
00:25:46.341 "uuid": "df325be0-d4e3-4749-ae1d-160b5c79c9a9"
00:25:46.341 }
00:25:46.341 ]
00:25:46.341 }
00:25:46.341 ]
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3839226
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:46.341 08:40:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:46.341 rmmod
nvme_tcp 00:25:46.341 rmmod nvme_fabrics 00:25:46.341 rmmod nvme_keyring 00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:46.341 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 3839111 ']' 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 3839111 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3839111 ']' 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3839111 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3839111 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3839111' 00:25:46.342 killing process with pid 3839111 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3839111 00:25:46.342 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3839111 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.634 08:40:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.546 08:40:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.546 00:25:48.546 real 0m11.146s 00:25:48.546 user 0m7.673s 00:25:48.546 sys 0m5.950s 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:48.807 ************************************ 00:25:48.807 END TEST nvmf_aer 00:25:48.807 ************************************ 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.807 ************************************ 00:25:48.807 START TEST nvmf_async_init 00:25:48.807 ************************************ 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:48.807 * Looking for test storage... 00:25:48.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.807 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.069 --rc genhtml_branch_coverage=1 00:25:49.069 --rc genhtml_function_coverage=1 00:25:49.069 --rc genhtml_legend=1 00:25:49.069 --rc geninfo_all_blocks=1 00:25:49.069 --rc geninfo_unexecuted_blocks=1 00:25:49.069 00:25:49.069 ' 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.069 --rc genhtml_branch_coverage=1 00:25:49.069 --rc genhtml_function_coverage=1 00:25:49.069 --rc genhtml_legend=1 00:25:49.069 --rc geninfo_all_blocks=1 00:25:49.069 --rc geninfo_unexecuted_blocks=1 00:25:49.069 00:25:49.069 ' 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.069 --rc genhtml_branch_coverage=1 00:25:49.069 --rc genhtml_function_coverage=1 00:25:49.069 --rc genhtml_legend=1 00:25:49.069 --rc geninfo_all_blocks=1 00:25:49.069 --rc geninfo_unexecuted_blocks=1 00:25:49.069 00:25:49.069 ' 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.069 --rc genhtml_branch_coverage=1 00:25:49.069 --rc genhtml_function_coverage=1 00:25:49.069 --rc genhtml_legend=1 00:25:49.069 --rc geninfo_all_blocks=1 00:25:49.069 --rc geninfo_unexecuted_blocks=1 00:25:49.069 00:25:49.069 ' 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:49.069 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.069 08:40:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:49.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:49.070 08:40:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2fefb2e3ed43449e9e70d76f889fb7b6 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:49.070 08:40:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:55.656 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:55.656 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.656 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # 
(( 0 > 0 )) 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:55.657 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:55.657 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:55.657 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:55.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:25:55.918 00:25:55.918 --- 10.0.0.2 ping statistics --- 00:25:55.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.918 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:25:55.918 00:25:55.918 --- 10.0.0.1 ping statistics --- 00:25:55.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.918 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=3843556 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 3843556 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3843556 ']' 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:55.918 08:40:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:55.918 [2024-10-01 08:40:47.656169] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:25:55.918 [2024-10-01 08:40:47.656221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.918 [2024-10-01 08:40:47.722659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.179 [2024-10-01 08:40:47.786116] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.179 [2024-10-01 08:40:47.786155] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.179 [2024-10-01 08:40:47.786163] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.179 [2024-10-01 08:40:47.786170] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.179 [2024-10-01 08:40:47.786176] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.179 [2024-10-01 08:40:47.786760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.748 [2024-10-01 08:40:48.494457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.748 null0 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.748 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2fefb2e3ed43449e9e70d76f889fb7b6 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:56.749 [2024-10-01 08:40:48.534727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.749 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.009 nvme0n1 00:25:57.009 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.009 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:57.009 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.009 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.009 [ 00:25:57.009 { 00:25:57.009 "name": "nvme0n1", 00:25:57.009 "aliases": [ 00:25:57.009 "2fefb2e3-ed43-449e-9e70-d76f889fb7b6" 00:25:57.009 ], 00:25:57.009 "product_name": "NVMe disk", 00:25:57.009 "block_size": 512, 00:25:57.009 "num_blocks": 2097152, 00:25:57.009 "uuid": "2fefb2e3-ed43-449e-9e70-d76f889fb7b6", 00:25:57.009 "numa_id": 0, 00:25:57.009 "assigned_rate_limits": { 00:25:57.009 "rw_ios_per_sec": 0, 00:25:57.009 "rw_mbytes_per_sec": 0, 00:25:57.009 "r_mbytes_per_sec": 0, 00:25:57.009 "w_mbytes_per_sec": 0 00:25:57.009 }, 00:25:57.009 "claimed": false, 00:25:57.009 "zoned": false, 00:25:57.009 "supported_io_types": { 00:25:57.010 "read": true, 00:25:57.010 "write": true, 00:25:57.010 "unmap": false, 00:25:57.010 "flush": true, 00:25:57.010 "reset": true, 00:25:57.010 "nvme_admin": true, 00:25:57.010 "nvme_io": true, 00:25:57.010 "nvme_io_md": false, 00:25:57.010 "write_zeroes": true, 00:25:57.010 "zcopy": false, 00:25:57.010 "get_zone_info": false, 00:25:57.010 "zone_management": false, 00:25:57.010 "zone_append": false, 00:25:57.010 "compare": true, 00:25:57.010 "compare_and_write": true, 00:25:57.010 "abort": true, 00:25:57.010 "seek_hole": false, 00:25:57.010 "seek_data": false, 00:25:57.010 "copy": true, 00:25:57.010 "nvme_iov_md": false 00:25:57.010 }, 00:25:57.010 
"memory_domains": [ 00:25:57.010 { 00:25:57.010 "dma_device_id": "system", 00:25:57.010 "dma_device_type": 1 00:25:57.010 } 00:25:57.010 ], 00:25:57.010 "driver_specific": { 00:25:57.010 "nvme": [ 00:25:57.010 { 00:25:57.010 "trid": { 00:25:57.010 "trtype": "TCP", 00:25:57.010 "adrfam": "IPv4", 00:25:57.010 "traddr": "10.0.0.2", 00:25:57.010 "trsvcid": "4420", 00:25:57.010 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:57.010 }, 00:25:57.010 "ctrlr_data": { 00:25:57.010 "cntlid": 1, 00:25:57.010 "vendor_id": "0x8086", 00:25:57.010 "model_number": "SPDK bdev Controller", 00:25:57.010 "serial_number": "00000000000000000000", 00:25:57.010 "firmware_revision": "25.01", 00:25:57.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.010 "oacs": { 00:25:57.010 "security": 0, 00:25:57.010 "format": 0, 00:25:57.010 "firmware": 0, 00:25:57.010 "ns_manage": 0 00:25:57.010 }, 00:25:57.010 "multi_ctrlr": true, 00:25:57.010 "ana_reporting": false 00:25:57.010 }, 00:25:57.010 "vs": { 00:25:57.010 "nvme_version": "1.3" 00:25:57.010 }, 00:25:57.010 "ns_data": { 00:25:57.010 "id": 1, 00:25:57.010 "can_share": true 00:25:57.010 } 00:25:57.010 } 00:25:57.010 ], 00:25:57.010 "mp_policy": "active_passive" 00:25:57.010 } 00:25:57.010 } 00:25:57.010 ] 00:25:57.010 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.010 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:57.010 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.010 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.010 [2024-10-01 08:40:48.783786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:57.010 [2024-10-01 08:40:48.783849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cbee0 (9): Bad file descriptor 00:25:57.271 [2024-10-01 08:40:48.916100] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.271 [ 00:25:57.271 { 00:25:57.271 "name": "nvme0n1", 00:25:57.271 "aliases": [ 00:25:57.271 "2fefb2e3-ed43-449e-9e70-d76f889fb7b6" 00:25:57.271 ], 00:25:57.271 "product_name": "NVMe disk", 00:25:57.271 "block_size": 512, 00:25:57.271 "num_blocks": 2097152, 00:25:57.271 "uuid": "2fefb2e3-ed43-449e-9e70-d76f889fb7b6", 00:25:57.271 "numa_id": 0, 00:25:57.271 "assigned_rate_limits": { 00:25:57.271 "rw_ios_per_sec": 0, 00:25:57.271 "rw_mbytes_per_sec": 0, 00:25:57.271 "r_mbytes_per_sec": 0, 00:25:57.271 "w_mbytes_per_sec": 0 00:25:57.271 }, 00:25:57.271 "claimed": false, 00:25:57.271 "zoned": false, 00:25:57.271 "supported_io_types": { 00:25:57.271 "read": true, 00:25:57.271 "write": true, 00:25:57.271 "unmap": false, 00:25:57.271 "flush": true, 00:25:57.271 "reset": true, 00:25:57.271 "nvme_admin": true, 00:25:57.271 "nvme_io": true, 00:25:57.271 "nvme_io_md": false, 00:25:57.271 "write_zeroes": true, 00:25:57.271 "zcopy": false, 00:25:57.271 "get_zone_info": false, 00:25:57.271 "zone_management": false, 00:25:57.271 "zone_append": false, 00:25:57.271 "compare": true, 00:25:57.271 "compare_and_write": true, 00:25:57.271 "abort": true, 00:25:57.271 "seek_hole": false, 00:25:57.271 "seek_data": false, 00:25:57.271 "copy": true, 00:25:57.271 "nvme_iov_md": false 00:25:57.271 }, 00:25:57.271 "memory_domains": [ 00:25:57.271 { 00:25:57.271 "dma_device_id": "system", 00:25:57.271 "dma_device_type": 1 00:25:57.271 } 00:25:57.271 ], 00:25:57.271 "driver_specific": { 00:25:57.271 "nvme": [ 00:25:57.271 { 00:25:57.271 "trid": { 00:25:57.271 "trtype": "TCP", 00:25:57.271 "adrfam": "IPv4", 00:25:57.271 "traddr": "10.0.0.2", 00:25:57.271 "trsvcid": "4420", 00:25:57.271 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:57.271 }, 00:25:57.271 "ctrlr_data": { 00:25:57.271 "cntlid": 2, 00:25:57.271 "vendor_id": "0x8086", 00:25:57.271 "model_number": "SPDK bdev Controller", 00:25:57.271 "serial_number": "00000000000000000000", 00:25:57.271 "firmware_revision": "25.01", 00:25:57.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.271 "oacs": { 00:25:57.271 "security": 0, 00:25:57.271 "format": 0, 00:25:57.271 "firmware": 0, 00:25:57.271 "ns_manage": 0 00:25:57.271 }, 00:25:57.271 "multi_ctrlr": true, 00:25:57.271 "ana_reporting": false 00:25:57.271 }, 00:25:57.271 "vs": { 00:25:57.271 "nvme_version": "1.3" 00:25:57.271 }, 00:25:57.271 "ns_data": { 00:25:57.271 "id": 1, 00:25:57.271 "can_share": true 00:25:57.271 } 00:25:57.271 } 00:25:57.271 ], 00:25:57.271 "mp_policy": "active_passive" 00:25:57.271 } 00:25:57.271 } 00:25:57.271 ] 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
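The remainder of the test (traced below) exercises the experimental TLS path. Condensed into one place, and again assuming rpc.py as the entry point, the sequence is roughly the following; every RPC name, NQN, address, and flag is taken verbatim from the trace, while the consolidation itself is an editorial sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=$(mktemp)    # /tmp/tmp.VqBfhJEzLV in this run
    # Fixed sample PSK in NVMe TLS interchange format (not a secret);
    # keyring_file_add_key requires 0600 permissions on the key file.
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
    chmod 0600 "$key"
    $rpc keyring_file_add_key key0 "$key"
    # Lock the subsystem down to named hosts, open a TLS listener on a
    # second port, register host1 with the PSK, then dial back in over TLS.
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 \
        --psk key0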
00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.VqBfhJEzLV 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.VqBfhJEzLV 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.VqBfhJEzLV 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.271 [2024-10-01 08:40:48.976390] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:57.271 [2024-10-01 08:40:48.976505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.271 08:40:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.271 [2024-10-01 08:40:48.996461] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:57.271 nvme0n1 00:25:57.271 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.271 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:25:57.271 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.271 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.271 [ 00:25:57.271 { 00:25:57.271 "name": "nvme0n1", 00:25:57.271 "aliases": [ 00:25:57.271 "2fefb2e3-ed43-449e-9e70-d76f889fb7b6" 00:25:57.271 ], 00:25:57.271 "product_name": "NVMe disk", 00:25:57.271 "block_size": 512, 00:25:57.271 "num_blocks": 2097152, 00:25:57.271 "uuid": "2fefb2e3-ed43-449e-9e70-d76f889fb7b6", 00:25:57.271 "numa_id": 0, 00:25:57.271 "assigned_rate_limits": { 00:25:57.271 "rw_ios_per_sec": 0, 00:25:57.271 "rw_mbytes_per_sec": 0, 00:25:57.271 "r_mbytes_per_sec": 0, 00:25:57.272 "w_mbytes_per_sec": 0 00:25:57.272 }, 00:25:57.272 "claimed": false, 00:25:57.272 "zoned": false, 00:25:57.272 "supported_io_types": { 00:25:57.272 "read": true, 00:25:57.272 "write": true, 00:25:57.272 "unmap": false, 00:25:57.272 "flush": true, 00:25:57.272 "reset": true, 00:25:57.272 "nvme_admin": true, 00:25:57.272 "nvme_io": true, 00:25:57.272 "nvme_io_md": false, 00:25:57.272 "write_zeroes": true, 00:25:57.272 "zcopy": false, 00:25:57.272 "get_zone_info": false, 00:25:57.272 "zone_management": false, 00:25:57.272 "zone_append": false, 00:25:57.272 "compare": true, 00:25:57.272 "compare_and_write": true, 00:25:57.272 "abort": true, 00:25:57.272 "seek_hole": false, 00:25:57.272 "seek_data": false, 00:25:57.272 "copy": true, 00:25:57.272 "nvme_iov_md": false 00:25:57.272 }, 00:25:57.272 "memory_domains": [ 00:25:57.272 { 00:25:57.272 "dma_device_id": "system", 00:25:57.272 "dma_device_type": 1 00:25:57.272 } 00:25:57.272 ], 00:25:57.272 "driver_specific": { 00:25:57.272 "nvme": [ 00:25:57.272 { 00:25:57.272 "trid": { 00:25:57.272 "trtype": "TCP", 00:25:57.272 "adrfam": "IPv4", 00:25:57.272 "traddr": "10.0.0.2", 00:25:57.272 "trsvcid": "4421", 00:25:57.272 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:57.272 }, 00:25:57.272 "ctrlr_data": { 00:25:57.272 "cntlid": 3, 00:25:57.272 "vendor_id": "0x8086", 00:25:57.272 "model_number": "SPDK bdev Controller", 00:25:57.272 "serial_number": "00000000000000000000", 00:25:57.272 "firmware_revision": "25.01", 00:25:57.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.272 "oacs": { 00:25:57.272 "security": 0, 00:25:57.272 "format": 0, 00:25:57.272 "firmware": 0, 00:25:57.272 "ns_manage": 0 00:25:57.272 }, 00:25:57.272 "multi_ctrlr": true, 00:25:57.272 "ana_reporting": false 00:25:57.272 }, 00:25:57.272 "vs": { 00:25:57.272 "nvme_version": "1.3" 00:25:57.272 }, 00:25:57.272 "ns_data": { 00:25:57.272 "id": 1, 00:25:57.272 "can_share": true 00:25:57.272 } 00:25:57.272 } 00:25:57.272 ], 00:25:57.272 "mp_policy": "active_passive" 00:25:57.272 } 00:25:57.272 } 00:25:57.272 ] 00:25:57.272 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.272 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.272 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.272 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.VqBfhJEzLV 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
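The teardown traced below (nvmftestfini in nvmf/common.sh) reads, approximately, as the following sketch; the pid and interface names are taken from this run, while the exact command forms and ordering are an approximation of the helper, not its verbatim body:

    kill 3843556    # nvmfpid recorded by nvmfappstart above
    # Unload the host-side modules; the rmmod lines below show nvme_tcp,
    # nvme_fabrics, and nvme_keyring going with them.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Drop only the ACCEPT rule this run added (tagged with an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # returns cvl_0_0 to the root namespace
    ip -4 addr flush cvl_0_1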
00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:57.532 rmmod nvme_tcp 00:25:57.532 rmmod nvme_fabrics 00:25:57.532 rmmod nvme_keyring 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 3843556 ']' 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 3843556 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3843556 ']' 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3843556 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3843556 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3843556' 00:25:57.532 killing process with pid 3843556 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3843556 00:25:57.532 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3843556 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.793 08:40:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.706 08:40:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:59.707 00:25:59.707 real 0m11.009s 00:25:59.707 user 0m3.794s 00:25:59.707 sys 0m5.664s 00:25:59.707 08:40:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:59.707 08:40:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:59.707 ************************************ 00:25:59.707 END TEST nvmf_async_init 00:25:59.707 ************************************ 00:25:59.707 08:40:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:59.707 08:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:59.707 08:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.707 08:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.968 ************************************ 00:25:59.968 START TEST dma 00:25:59.968 ************************************ 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:59.968 * Looking for test storage... 00:25:59.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:59.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.968 --rc genhtml_branch_coverage=1 00:25:59.968 --rc genhtml_function_coverage=1 00:25:59.968 --rc genhtml_legend=1 00:25:59.968 --rc geninfo_all_blocks=1 00:25:59.968 --rc geninfo_unexecuted_blocks=1 00:25:59.968 00:25:59.968 ' 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:59.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.968 --rc genhtml_branch_coverage=1 00:25:59.968 --rc genhtml_function_coverage=1 00:25:59.968 --rc genhtml_legend=1 00:25:59.968 --rc geninfo_all_blocks=1 00:25:59.968 --rc geninfo_unexecuted_blocks=1 00:25:59.968 00:25:59.968 ' 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:59.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.968 --rc genhtml_branch_coverage=1 00:25:59.968 --rc genhtml_function_coverage=1 00:25:59.968 --rc genhtml_legend=1 00:25:59.968 --rc geninfo_all_blocks=1 00:25:59.968 --rc geninfo_unexecuted_blocks=1 00:25:59.968 00:25:59.968 ' 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:59.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.968 --rc genhtml_branch_coverage=1 00:25:59.968 --rc genhtml_function_coverage=1 00:25:59.968 --rc genhtml_legend=1 00:25:59.968 --rc geninfo_all_blocks=1 00:25:59.968 --rc geninfo_unexecuted_blocks=1 00:25:59.968 00:25:59.968 ' 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.968 
08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.968 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:59.969 00:25:59.969 real 0m0.230s 00:25:59.969 user 0m0.140s 00:25:59.969 sys 0m0.106s 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:59.969 08:40:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:59.969 ************************************ 00:25:59.969 END TEST dma 00:25:59.969 ************************************ 00:26:00.230 08:40:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:00.230 08:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:00.230 08:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:00.230 08:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.230 ************************************ 00:26:00.230 START TEST nvmf_identify 00:26:00.230 
************************************ 00:26:00.230 08:40:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:00.230 * Looking for test storage... 00:26:00.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:00.230 08:40:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:00.230 08:40:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:26:00.230 08:40:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.230 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:00.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.231 --rc genhtml_branch_coverage=1 00:26:00.231 --rc genhtml_function_coverage=1 00:26:00.231 --rc genhtml_legend=1 00:26:00.231 --rc geninfo_all_blocks=1 00:26:00.231 --rc geninfo_unexecuted_blocks=1 00:26:00.231 00:26:00.231 ' 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:00.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.231 --rc genhtml_branch_coverage=1 00:26:00.231 --rc genhtml_function_coverage=1 00:26:00.231 --rc genhtml_legend=1 00:26:00.231 --rc geninfo_all_blocks=1 00:26:00.231 --rc geninfo_unexecuted_blocks=1 00:26:00.231 00:26:00.231 ' 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:00.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.231 --rc genhtml_branch_coverage=1 00:26:00.231 --rc genhtml_function_coverage=1 00:26:00.231 --rc genhtml_legend=1 00:26:00.231 --rc geninfo_all_blocks=1 00:26:00.231 --rc geninfo_unexecuted_blocks=1 00:26:00.231 00:26:00.231 ' 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:00.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.231 --rc genhtml_branch_coverage=1 00:26:00.231 --rc genhtml_function_coverage=1 00:26:00.231 --rc genhtml_legend=1 00:26:00.231 --rc geninfo_all_blocks=1 00:26:00.231 --rc geninfo_unexecuted_blocks=1 00:26:00.231 00:26:00.231 ' 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
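The trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates version 2 before picking coverage flags: `lt` delegates to `cmp_versions`, which splits both version strings on `.`, `-`, or `:` and compares the fields numerically, left to right. A minimal standalone sketch of that comparison, reconstructed only from what the xtrace shows (the real helper handles more operators than this):

    # Sketch of the component-wise version compare traced above -- same idea,
    # not a verbatim copy of SPDK's scripts/common.sh.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                      # split fields on dot, dash, or colon
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v a b
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing fields count as 0
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                  # every field was equal
    }

    lt 1.15 2 && echo "lcov predates 2.x"  # mirrors the 'lt 1.15 2' call in the log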
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:00.231 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.493 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:00.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:00.494 08:40:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:08.633 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:08.633 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.633 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.634 
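What the loop above is doing: for each supported NIC function found on the PCI bus (the two 0x8086:0x159b devices are Intel E810 ports bound to the ice driver), the script resolves the kernel net devices exposed under /sys/bus/pci/devices/<addr>/net/ and keeps the ones that are up. A small illustrative sketch of that sysfs walk; the helper name is hypothetical, and reading operstate is just one plausible source for the "up" the trace compares against:

    # Illustrative sysfs walk from a PCI function to its net device names;
    # the function name is invented, the paths and address come from the log.
    pci_to_netdevs() {
        local pci=$1 net_dev state
        for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $net_dev ]] || continue          # unbound functions expose no netdev
            state=$(< "$net_dev/operstate")
            [[ $state == up ]] && echo "Found net devices under $pci: ${net_dev##*/}"
        done
    }

    pci_to_netdevs 0000:4b:00.0   # -> Found net devices under 0000:4b:00.0: cvl_0_0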
08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:08.634 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:08.634 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:08.634 08:40:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.634 08:40:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:26:08.634 00:26:08.634 --- 10.0.0.2 ping statistics --- 00:26:08.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.634 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:26:08.634 00:26:08.634 --- 10.0.0.1 ping statistics --- 00:26:08.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.634 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3847965 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3847965 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3847965 ']' 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.634 08:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.634 [2024-10-01 08:40:59.362967] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
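The two pings just traced verify a single-host test topology: one port of the NIC (cvl_0_0, the target side) has been moved into a fresh network namespace where nvmf_tgt will run, while the other port (cvl_0_1) stays in the root namespace as the initiator. Condensed replay of the exact commands in the trace:

    # Namespace setup as traced above (interface names and IPs as logged).
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # root ns -> target ns (0.610 ms)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns (0.288 ms)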
00:26:08.634 [2024-10-01 08:40:59.363049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.634 [2024-10-01 08:40:59.436487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.634 [2024-10-01 08:40:59.513601] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.634 [2024-10-01 08:40:59.513641] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.634 [2024-10-01 08:40:59.513649] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.634 [2024-10-01 08:40:59.513655] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.634 [2024-10-01 08:40:59.513661] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.634 [2024-10-01 08:40:59.515500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.634 [2024-10-01 08:40:59.515635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.634 [2024-10-01 08:40:59.515794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.634 [2024-10-01 08:40:59.515795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.634 [2024-10-01 08:41:00.181145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.634 Malloc0 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.634 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.635 [2024-10-01 08:41:00.280407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.635 [ 00:26:08.635 { 00:26:08.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:08.635 "subtype": "Discovery", 00:26:08.635 "listen_addresses": [ 00:26:08.635 { 00:26:08.635 "trtype": "TCP", 00:26:08.635 "adrfam": "IPv4", 00:26:08.635 "traddr": "10.0.0.2", 00:26:08.635 "trsvcid": "4420" 00:26:08.635 } 00:26:08.635 ], 00:26:08.635 "allow_any_host": true, 00:26:08.635 "hosts": [] 00:26:08.635 }, 00:26:08.635 { 00:26:08.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.635 "subtype": "NVMe", 00:26:08.635 "listen_addresses": [ 00:26:08.635 { 00:26:08.635 "trtype": "TCP", 00:26:08.635 "adrfam": "IPv4", 00:26:08.635 "traddr": "10.0.0.2", 00:26:08.635 "trsvcid": "4420" 00:26:08.635 } 00:26:08.635 ], 00:26:08.635 "allow_any_host": true, 00:26:08.635 "hosts": [], 00:26:08.635 "serial_number": "SPDK00000000000001", 00:26:08.635 "model_number": "SPDK bdev Controller", 00:26:08.635 "max_namespaces": 32, 00:26:08.635 "min_cntlid": 1, 00:26:08.635 "max_cntlid": 65519, 00:26:08.635 "namespaces": [ 00:26:08.635 { 00:26:08.635 "nsid": 1, 00:26:08.635 "bdev_name": "Malloc0", 00:26:08.635 "name": "Malloc0", 00:26:08.635 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:08.635 "eui64": "ABCDEF0123456789", 00:26:08.635 "uuid": "9c151ba5-5bc9-434c-abd9-9554e3694bf1" 00:26:08.635 } 00:26:08.635 ] 00:26:08.635 } 00:26:08.635 ] 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.635 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:08.635 [2024-10-01 08:41:00.345237] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:26:08.635 [2024-10-01 08:41:00.345307] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848310 ] 00:26:08.635 [2024-10-01 08:41:00.377674] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:08.635 [2024-10-01 08:41:00.377724] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:08.635 [2024-10-01 08:41:00.377730] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:08.635 [2024-10-01 08:41:00.377745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:08.635 [2024-10-01 08:41:00.377754] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:08.635 [2024-10-01 08:41:00.381266] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:08.635 [2024-10-01 08:41:00.381301] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2413760 0 00:26:08.635 [2024-10-01 08:41:00.388007] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:08.635 [2024-10-01 08:41:00.388019] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:08.635 [2024-10-01 08:41:00.388024] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:08.635 [2024-10-01 08:41:00.388028] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:08.635 [2024-10-01 08:41:00.388056] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.388061] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.388066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.635 [2024-10-01 08:41:00.388079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:08.635 [2024-10-01 08:41:00.388097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.635 [2024-10-01 08:41:00.396005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.635 [2024-10-01 08:41:00.396015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.635 [2024-10-01 08:41:00.396019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.635 [2024-10-01 08:41:00.396033] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:08.635 [2024-10-01 08:41:00.396040] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:08.635 [2024-10-01 08:41:00.396045] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:08.635 [2024-10-01 08:41:00.396058] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.635 [2024-10-01 08:41:00.396074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.635 [2024-10-01 08:41:00.396088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.635 [2024-10-01 08:41:00.396258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.635 [2024-10-01 08:41:00.396265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.635 [2024-10-01 08:41:00.396268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396273] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.635 [2024-10-01 08:41:00.396277] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:08.635 [2024-10-01 08:41:00.396285] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:08.635 [2024-10-01 08:41:00.396296] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396300] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396304] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.635 [2024-10-01 08:41:00.396311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.635 [2024-10-01 08:41:00.396321] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.635 [2024-10-01 08:41:00.396455] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.635 [2024-10-01 08:41:00.396462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.635 [2024-10-01 08:41:00.396466] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396470] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.635 [2024-10-01 08:41:00.396475] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:08.635 [2024-10-01 08:41:00.396483] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:08.635 [2024-10-01 08:41:00.396489] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396493] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396497] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.635 [2024-10-01 08:41:00.396504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.635 [2024-10-01 08:41:00.396515] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.635 
[2024-10-01 08:41:00.396617] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.635 [2024-10-01 08:41:00.396623] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.635 [2024-10-01 08:41:00.396627] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396631] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.635 [2024-10-01 08:41:00.396636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:08.635 [2024-10-01 08:41:00.396645] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.635 [2024-10-01 08:41:00.396653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.396659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.636 [2024-10-01 08:41:00.396670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.636 [2024-10-01 08:41:00.396807] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.636 [2024-10-01 08:41:00.396814] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.636 [2024-10-01 08:41:00.396817] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.396821] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.636 [2024-10-01 08:41:00.396826] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:08.636 [2024-10-01 08:41:00.396831] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:08.636 [2024-10-01 08:41:00.396838] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:08.636 [2024-10-01 08:41:00.396946] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:08.636 [2024-10-01 08:41:00.396951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:08.636 [2024-10-01 08:41:00.396959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.396963] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.396967] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.396974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.636 [2024-10-01 08:41:00.396984] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.636 [2024-10-01 08:41:00.397253] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.636 [2024-10-01 08:41:00.397259] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:26:08.636 [2024-10-01 08:41:00.397263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397267] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.636 [2024-10-01 08:41:00.397272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:08.636 [2024-10-01 08:41:00.397281] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397286] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397289] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.397296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.636 [2024-10-01 08:41:00.397307] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.636 [2024-10-01 08:41:00.397402] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.636 [2024-10-01 08:41:00.397409] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.636 [2024-10-01 08:41:00.397412] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397416] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.636 [2024-10-01 08:41:00.397421] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:08.636 [2024-10-01 08:41:00.397425] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:08.636 [2024-10-01 08:41:00.397433] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:08.636 [2024-10-01 08:41:00.397441] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:08.636 [2024-10-01 08:41:00.397450] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397454] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.397461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.636 [2024-10-01 08:41:00.397472] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.636 [2024-10-01 08:41:00.397611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.636 [2024-10-01 08:41:00.397617] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.636 [2024-10-01 08:41:00.397621] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397627] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2413760): datao=0, datal=4096, cccid=0 00:26:08.636 [2024-10-01 08:41:00.397632] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473480) on tqpair(0x2413760): expected_datao=0, 
payload_size=4096 00:26:08.636 [2024-10-01 08:41:00.397637] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397658] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397663] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397755] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.636 [2024-10-01 08:41:00.397761] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.636 [2024-10-01 08:41:00.397765] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397769] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.636 [2024-10-01 08:41:00.397776] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:08.636 [2024-10-01 08:41:00.397781] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:08.636 [2024-10-01 08:41:00.397786] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:08.636 [2024-10-01 08:41:00.397791] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:08.636 [2024-10-01 08:41:00.397796] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:08.636 [2024-10-01 08:41:00.397800] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:08.636 [2024-10-01 08:41:00.397808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:08.636 [2024-10-01 08:41:00.397815] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397819] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.397823] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.397830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:08.636 [2024-10-01 08:41:00.397841] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.636 [2024-10-01 08:41:00.398006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.636 [2024-10-01 08:41:00.398013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.636 [2024-10-01 08:41:00.398017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760 00:26:08.636 [2024-10-01 08:41:00.398028] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398032] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398036] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.398042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.636 [2024-10-01 08:41:00.398049] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398053] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398056] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.398063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.636 [2024-10-01 08:41:00.398069] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398075] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398079] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.398085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.636 [2024-10-01 08:41:00.398091] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398095] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398099] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.398105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.636 [2024-10-01 08:41:00.398109] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:08.636 [2024-10-01 08:41:00.398120] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:08.636 [2024-10-01 08:41:00.398126] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.636 [2024-10-01 08:41:00.398130] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2413760) 00:26:08.636 [2024-10-01 08:41:00.398137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.636 [2024-10-01 08:41:00.398149] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473480, cid 0, qid 0 00:26:08.636 [2024-10-01 08:41:00.398155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473600, cid 1, qid 0 00:26:08.636 [2024-10-01 08:41:00.398160] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473780, cid 2, qid 0 00:26:08.636 [2024-10-01 08:41:00.398165] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.637 [2024-10-01 08:41:00.398169] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473a80, cid 4, qid 0 00:26:08.637 [2024-10-01 08:41:00.398436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.637 [2024-10-01 08:41:00.398442] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.637 [2024-10-01 08:41:00.398446] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2473a80) on tqpair=0x2413760 00:26:08.637 [2024-10-01 08:41:00.398455] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:08.637 [2024-10-01 08:41:00.398460] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:08.637 [2024-10-01 08:41:00.398470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398474] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2413760) 00:26:08.637 [2024-10-01 08:41:00.398481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.637 [2024-10-01 08:41:00.398491] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473a80, cid 4, qid 0 00:26:08.637 [2024-10-01 08:41:00.398577] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.637 [2024-10-01 08:41:00.398584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.637 [2024-10-01 08:41:00.398588] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398592] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2413760): datao=0, datal=4096, cccid=4 00:26:08.637 [2024-10-01 08:41:00.398596] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473a80) on tqpair(0x2413760): expected_datao=0, payload_size=4096 00:26:08.637 [2024-10-01 08:41:00.398603] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398610] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398614] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398738] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.637 [2024-10-01 08:41:00.398744] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.637 [2024-10-01 08:41:00.398748] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398752] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473a80) on tqpair=0x2413760 00:26:08.637 [2024-10-01 08:41:00.398763] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:08.637 [2024-10-01 08:41:00.398786] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398790] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2413760) 00:26:08.637 [2024-10-01 08:41:00.398797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.637 [2024-10-01 08:41:00.398804] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398808] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.637 [2024-10-01 08:41:00.398812] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2413760) 00:26:08.637 [2024-10-01 08:41:00.398818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.637 [2024-10-01 
00:26:08.637 [2024-10-01 08:41:00.398830] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473a80, cid 4, qid 0
00:26:08.637 [2024-10-01 08:41:00.398835] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473c00, cid 5, qid 0
00:26:08.637 [2024-10-01 08:41:00.402001] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:08.637 [2024-10-01 08:41:00.402010] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:08.637 [2024-10-01 08:41:00.402013] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.402017] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2413760): datao=0, datal=1024, cccid=4
00:26:08.637 [2024-10-01 08:41:00.402022] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473a80) on tqpair(0x2413760): expected_datao=0, payload_size=1024
00:26:08.637 [2024-10-01 08:41:00.402026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.402033] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.402037] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.402042] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.637 [2024-10-01 08:41:00.402048] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.637 [2024-10-01 08:41:00.402052] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.402056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473c00) on tqpair=0x2413760
00:26:08.637 [2024-10-01 08:41:00.443002] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.637 [2024-10-01 08:41:00.443010] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.637 [2024-10-01 08:41:00.443014] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443018] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473a80) on tqpair=0x2413760
00:26:08.637 [2024-10-01 08:41:00.443034] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443039] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2413760)
00:26:08.637 [2024-10-01 08:41:00.443046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.637 [2024-10-01 08:41:00.443062] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473a80, cid 4, qid 0
00:26:08.637 [2024-10-01 08:41:00.443244] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:08.637 [2024-10-01 08:41:00.443251] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:08.637 [2024-10-01 08:41:00.443255] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443258] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2413760): datao=0, datal=3072, cccid=4
00:26:08.637 [2024-10-01 08:41:00.443263] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473a80) on tqpair(0x2413760): expected_datao=0, payload_size=3072
00:26:08.637 [2024-10-01 08:41:00.443267] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
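The recurring "pdu type = 5" / "pdu type = 7" markers in this trace are NVMe/TCP PDU headers being dispatched: 5 is CapsuleResp and 7 is C2HData, which is why each C2HData burst above is followed by a CapsuleResp that completes the tcp_req. An illustrative decoder using the PDU type values from the NVMe/TCP transport specification (this is a reading aid for the log, not SPDK's internal enum):

    /* PDU common header "type" field, per the NVMe/TCP transport spec. */
    static const char *nvme_tcp_pdu_type_str(unsigned int type)
    {
            switch (type) {
            case 0x00: return "ICReq";        /* initialize connection request */
            case 0x01: return "ICResp";
            case 0x02: return "H2CTermReq";   /* host-to-controller terminate */
            case 0x03: return "C2HTermReq";
            case 0x04: return "CapsuleCmd";   /* command capsule */
            case 0x05: return "CapsuleResp";  /* "pdu type = 5" above */
            case 0x06: return "H2CData";
            case 0x07: return "C2HData";      /* "pdu type = 7" above */
            case 0x09: return "R2T";          /* ready to transfer */
            default:   return "reserved";
            }
    }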
00:26:08.637 [2024-10-01 08:41:00.443278] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443282] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443414] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.637 [2024-10-01 08:41:00.443420] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.637 [2024-10-01 08:41:00.443424] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443428] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473a80) on tqpair=0x2413760
00:26:08.637 [2024-10-01 08:41:00.443436] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443440] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2413760)
00:26:08.637 [2024-10-01 08:41:00.443446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.637 [2024-10-01 08:41:00.443460] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473a80, cid 4, qid 0
00:26:08.637 [2024-10-01 08:41:00.443654] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:08.637 [2024-10-01 08:41:00.443660] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:08.637 [2024-10-01 08:41:00.443664] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443668] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2413760): datao=0, datal=8, cccid=4
00:26:08.637 [2024-10-01 08:41:00.443672] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2473a80) on tqpair(0x2413760): expected_datao=0, payload_size=8
00:26:08.637 [2024-10-01 08:41:00.443677] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443683] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:08.637 [2024-10-01 08:41:00.443687] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:08.902 [2024-10-01 08:41:00.486003] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.902 [2024-10-01 08:41:00.486013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.902 [2024-10-01 08:41:00.486017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.902 [2024-10-01 08:41:00.486021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473a80) on tqpair=0x2413760
00:26:08.902 =====================================================
00:26:08.902 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:26:08.902 =====================================================
00:26:08.902 Controller Capabilities/Features
00:26:08.902 ================================
00:26:08.902 Vendor ID: 0000
00:26:08.902 Subsystem Vendor ID: 0000
00:26:08.902 Serial Number: ....................
00:26:08.902 Model Number: ........................................
00:26:08.902 Firmware Version: 25.01
00:26:08.902 Recommended Arb Burst: 0
00:26:08.902 IEEE OUI Identifier: 00 00 00
00:26:08.902 Multi-path I/O
00:26:08.902 May have multiple subsystem ports: No
00:26:08.902 May have multiple controllers: No
00:26:08.902 Associated with SR-IOV VF: No
00:26:08.902 Max Data Transfer Size: 131072
00:26:08.902 Max Number of Namespaces: 0
00:26:08.902 Max Number of I/O Queues: 1024
00:26:08.902 NVMe Specification Version (VS): 1.3
00:26:08.902 NVMe Specification Version (Identify): 1.3
00:26:08.902 Maximum Queue Entries: 128
00:26:08.902 Contiguous Queues Required: Yes
00:26:08.902 Arbitration Mechanisms Supported
00:26:08.902 Weighted Round Robin: Not Supported
00:26:08.902 Vendor Specific: Not Supported
00:26:08.902 Reset Timeout: 15000 ms
00:26:08.902 Doorbell Stride: 4 bytes
00:26:08.902 NVM Subsystem Reset: Not Supported
00:26:08.902 Command Sets Supported
00:26:08.902 NVM Command Set: Supported
00:26:08.902 Boot Partition: Not Supported
00:26:08.902 Memory Page Size Minimum: 4096 bytes
00:26:08.902 Memory Page Size Maximum: 4096 bytes
00:26:08.902 Persistent Memory Region: Not Supported
00:26:08.902 Optional Asynchronous Events Supported
00:26:08.902 Namespace Attribute Notices: Not Supported
00:26:08.902 Firmware Activation Notices: Not Supported
00:26:08.902 ANA Change Notices: Not Supported
00:26:08.902 PLE Aggregate Log Change Notices: Not Supported
00:26:08.902 LBA Status Info Alert Notices: Not Supported
00:26:08.902 EGE Aggregate Log Change Notices: Not Supported
00:26:08.902 Normal NVM Subsystem Shutdown event: Not Supported
00:26:08.902 Zone Descriptor Change Notices: Not Supported
00:26:08.902 Discovery Log Change Notices: Supported
00:26:08.902 Controller Attributes
00:26:08.902 128-bit Host Identifier: Not Supported
00:26:08.902 Non-Operational Permissive Mode: Not Supported
00:26:08.902 NVM Sets: Not Supported
00:26:08.902 Read Recovery Levels: Not Supported
00:26:08.902 Endurance Groups: Not Supported
00:26:08.902 Predictable Latency Mode: Not Supported
00:26:08.902 Traffic Based Keep Alive: Not Supported
00:26:08.902 Namespace Granularity: Not Supported
00:26:08.902 SQ Associations: Not Supported
00:26:08.902 UUID List: Not Supported
00:26:08.902 Multi-Domain Subsystem: Not Supported
00:26:08.902 Fixed Capacity Management: Not Supported
00:26:08.902 Variable Capacity Management: Not Supported
00:26:08.902 Delete Endurance Group: Not Supported
00:26:08.902 Delete NVM Set: Not Supported
00:26:08.902 Extended LBA Formats Supported: Not Supported
00:26:08.902 Flexible Data Placement Supported: Not Supported
00:26:08.902
00:26:08.902 Controller Memory Buffer Support
00:26:08.902 ================================
00:26:08.902 Supported: No
00:26:08.902
00:26:08.902 Persistent Memory Region Support
00:26:08.902 ================================
00:26:08.902 Supported: No
00:26:08.902
00:26:08.902 Admin Command Set Attributes
00:26:08.902 ============================
00:26:08.902 Security Send/Receive: Not Supported
00:26:08.902 Format NVM: Not Supported
00:26:08.902 Firmware Activate/Download: Not Supported
00:26:08.902 Namespace Management: Not Supported
00:26:08.902 Device Self-Test: Not Supported
00:26:08.902 Directives: Not Supported
00:26:08.902 NVMe-MI: Not Supported
00:26:08.902 Virtualization Management: Not Supported
00:26:08.902 Doorbell Buffer Config: Not Supported
00:26:08.902 Get LBA Status Capability: Not Supported
00:26:08.902 Command & Feature Lockdown Capability: Not Supported
00:26:08.902 Abort Command Limit: 1
00:26:08.902 Async Event Request Limit: 4
00:26:08.902 Number of Firmware Slots: N/A
00:26:08.902 Firmware Slot 1 Read-Only: N/A
00:26:08.902 Firmware Activation Without Reset: N/A
00:26:08.902 Multiple Update Detection Support: N/A
00:26:08.902 Firmware Update Granularity: No Information Provided
00:26:08.902 Per-Namespace SMART Log: No
00:26:08.902 Asymmetric Namespace Access Log Page: Not Supported
00:26:08.902 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:26:08.902 Command Effects Log Page: Not Supported
00:26:08.902 Get Log Page Extended Data: Supported
00:26:08.902 Telemetry Log Pages: Not Supported
00:26:08.902 Persistent Event Log Pages: Not Supported
00:26:08.902 Supported Log Pages Log Page: May Support
00:26:08.902 Commands Supported & Effects Log Page: Not Supported
00:26:08.902 Feature Identifiers & Effects Log Page: May Support
00:26:08.902 NVMe-MI Commands & Effects Log Page: May Support
00:26:08.902 Data Area 4 for Telemetry Log: Not Supported
00:26:08.902 Error Log Page Entries Supported: 128
00:26:08.902 Keep Alive: Not Supported
00:26:08.902
00:26:08.902 NVM Command Set Attributes
00:26:08.902 ==========================
00:26:08.902 Submission Queue Entry Size
00:26:08.902 Max: 1
00:26:08.902 Min: 1
00:26:08.902 Completion Queue Entry Size
00:26:08.902 Max: 1
00:26:08.902 Min: 1
00:26:08.903 Number of Namespaces: 0
00:26:08.903 Compare Command: Not Supported
00:26:08.903 Write Uncorrectable Command: Not Supported
00:26:08.903 Dataset Management Command: Not Supported
00:26:08.903 Write Zeroes Command: Not Supported
00:26:08.903 Set Features Save Field: Not Supported
00:26:08.903 Reservations: Not Supported
00:26:08.903 Timestamp: Not Supported
00:26:08.903 Copy: Not Supported
00:26:08.903 Volatile Write Cache: Not Present
00:26:08.903 Atomic Write Unit (Normal): 1
00:26:08.903 Atomic Write Unit (PFail): 1
00:26:08.903 Atomic Compare & Write Unit: 1
00:26:08.903 Fused Compare & Write: Supported
00:26:08.903 Scatter-Gather List
00:26:08.903 SGL Command Set: Supported
00:26:08.903 SGL Keyed: Supported
00:26:08.903 SGL Bit Bucket Descriptor: Not Supported
00:26:08.903 SGL Metadata Pointer: Not Supported
00:26:08.903 Oversized SGL: Not Supported
00:26:08.903 SGL Metadata Address: Not Supported
00:26:08.903 SGL Offset: Supported
00:26:08.903 Transport SGL Data Block: Not Supported
00:26:08.903 Replay Protected Memory Block: Not Supported
00:26:08.903
00:26:08.903 Firmware Slot Information
00:26:08.903 =========================
00:26:08.903 Active slot: 0
00:26:08.903
00:26:08.903
00:26:08.903 Error Log
00:26:08.903 =========
00:26:08.903
00:26:08.903 Active Namespaces
00:26:08.903 =================
00:26:08.903 Discovery Log Page
00:26:08.903 ==================
00:26:08.903 Generation Counter: 2
00:26:08.903 Number of Records: 2
00:26:08.903 Record Format: 0
00:26:08.903
00:26:08.903 Discovery Log Entry 0
00:26:08.903 ----------------------
00:26:08.903 Transport Type: 3 (TCP)
00:26:08.903 Address Family: 1 (IPv4)
00:26:08.903 Subsystem Type: 3 (Current Discovery Subsystem)
00:26:08.903 Entry Flags:
00:26:08.903 Duplicate Returned Information: 1
00:26:08.903 Explicit Persistent Connection Support for Discovery: 1
00:26:08.903 Transport Requirements:
00:26:08.903 Secure Channel: Not Required
00:26:08.903 Port ID: 0 (0x0000)
00:26:08.903 Controller ID: 65535 (0xffff)
00:26:08.903 Admin Max SQ Size: 128
00:26:08.903 Transport Service Identifier: 4420
00:26:08.903 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:26:08.903 Transport Address: 10.0.0.2
00:26:08.903 Discovery Log Entry 1
00:26:08.903 ----------------------
00:26:08.903 Transport Type: 3 (TCP)
00:26:08.903 Address Family: 1 (IPv4)
00:26:08.903 Subsystem Type: 2 (NVM Subsystem)
00:26:08.903 Entry Flags:
00:26:08.903 Duplicate Returned Information: 0
00:26:08.903 Explicit Persistent Connection Support for Discovery: 0
00:26:08.903 Transport Requirements:
00:26:08.903 Secure Channel: Not Required
00:26:08.903 Port ID: 0 (0x0000)
00:26:08.903 Controller ID: 65535 (0xffff)
00:26:08.903 Admin Max SQ Size: 128
00:26:08.903 Transport Service Identifier: 4420
00:26:08.903 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:26:08.903 Transport Address: 10.0.0.2
[2024-10-01 08:41:00.486104] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:26:08.903 [2024-10-01 08:41:00.486114] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473480) on tqpair=0x2413760
00:26:08.903 [2024-10-01 08:41:00.486121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.903 [2024-10-01 08:41:00.486127] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473600) on tqpair=0x2413760
00:26:08.903 [2024-10-01 08:41:00.486131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.903 [2024-10-01 08:41:00.486137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473780) on tqpair=0x2413760
00:26:08.903 [2024-10-01 08:41:00.486141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.903 [2024-10-01 08:41:00.486148] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760
00:26:08.903 [2024-10-01 08:41:00.486153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.903 [2024-10-01 08:41:00.486161] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486165] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486169] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760)
00:26:08.903 [2024-10-01 08:41:00.486176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.903 [2024-10-01 08:41:00.486190] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0
00:26:08.903 [2024-10-01 08:41:00.486373] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.903 [2024-10-01 08:41:00.486380] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.903 [2024-10-01 08:41:00.486383] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486387] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760
00:26:08.903 [2024-10-01 08:41:00.486394] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486401] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760)
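Each "Discovery Log Entry" block above is a rendered 1024-byte discovery log page entry (log page identifier 0x70, visible as the low byte of the cdw10:00ff0070 / 02ff0070 / 00010070 values in the GET LOG PAGE commands earlier). A sketch of the on-wire layout per the NVMe over Fabrics specification, which is what Transport Type 3 (TCP), Address Family 1 (IPv4) and Subsystem Type 2/3 are decoded from; SPDK ships an equivalent structure in its nvmf_spec.h header, so treat this as an illustrative mirror rather than the project's definition:

    #include <stdint.h>

    /* Discovery log page entry, offsets per NVMe-oF. Entry 1 above would
     * carry trtype=3, adrfam=1, subtype=2, trsvcid="4420",
     * traddr="10.0.0.2", subnqn="nqn.2016-06.io.spdk:cnode1". */
    struct discovery_log_entry {
            uint8_t  trtype;          /* 3 = TCP */
            uint8_t  adrfam;          /* 1 = IPv4 */
            uint8_t  subtype;         /* 2 = NVM subsystem, 3 = discovery */
            uint8_t  treq;            /* transport requirements */
            uint16_t portid;
            uint16_t cntlid;          /* 0xffff = dynamic controller model */
            uint16_t asqsz;           /* admin max SQ size, 128 above */
            uint8_t  reserved10[22];
            uint8_t  trsvcid[32];     /* "4420" */
            uint8_t  reserved64[192];
            uint8_t  subnqn[256];
            uint8_t  traddr[256];
            uint8_t  tsas[256];       /* transport-specific address subtype */
    };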
00:26:08.903 [2024-10-01 08:41:00.486408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.903 [2024-10-01 08:41:00.486421] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0
00:26:08.903 [2024-10-01 08:41:00.486588] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.903 [2024-10-01 08:41:00.486594] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.903 [2024-10-01 08:41:00.486598] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486602] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760
00:26:08.903 [2024-10-01 08:41:00.486607] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:26:08.903 [2024-10-01 08:41:00.486614] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:26:08.903 [2024-10-01 08:41:00.486623] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486627] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486631] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760)
00:26:08.903 [2024-10-01 08:41:00.486638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.903 [2024-10-01 08:41:00.486648] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0
00:26:08.903 [2024-10-01 08:41:00.486747] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.903 [2024-10-01 08:41:00.486753] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.903 [2024-10-01 08:41:00.486757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486760] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760
00:26:08.903 [2024-10-01 08:41:00.486771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486775] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760)
00:26:08.903 [2024-10-01 08:41:00.486785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.903 [2024-10-01 08:41:00.486797] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0
00:26:08.903 [2024-10-01 08:41:00.486936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.903 [2024-10-01 08:41:00.486943] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.903 [2024-10-01 08:41:00.486946] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486950] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760
00:26:08.903 [2024-10-01 08:41:00.486960] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486964] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.903 [2024-10-01 08:41:00.486967]
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.903 [2024-10-01 08:41:00.486974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.903 [2024-10-01 08:41:00.486984] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.903 [2024-10-01 08:41:00.487155] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.903 [2024-10-01 08:41:00.487162] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.903 [2024-10-01 08:41:00.487165] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.903 [2024-10-01 08:41:00.487169] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.903 [2024-10-01 08:41:00.487179] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.903 [2024-10-01 08:41:00.487183] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.903 [2024-10-01 08:41:00.487186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.487193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.487204] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.487348] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.487355] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.487358] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487362] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.487372] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487375] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487379] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.487386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.487396] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.487531] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.487537] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.487541] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487545] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.487554] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487558] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487562] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.487569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.487579] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.487747] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.487753] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.487757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487761] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.487770] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.487784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.487795] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.487933] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.487939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.487943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.487956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487960] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.487964] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.487970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.487980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.488157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.488163] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.488167] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488170] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.488180] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488184] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488187] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.488194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.488204] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 
[2024-10-01 08:41:00.488325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.488331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.488335] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488339] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.488348] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488352] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488356] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.488362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.488372] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.488507] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.488515] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.488519] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488523] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.488532] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488536] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488540] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.488547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.488557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.488699] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.488705] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.488709] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488713] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.488722] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488726] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488730] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.488736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.488746] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.488877] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.488884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:08.904 [2024-10-01 08:41:00.488887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.488901] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488905] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.488908] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.488915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.488925] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.489072] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.489078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.489082] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.489086] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.489095] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.489099] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.489103] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.489109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.489120] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.489289] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.489296] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.489303] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.489307] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.489317] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.489321] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.489325] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.904 [2024-10-01 08:41:00.489331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.904 [2024-10-01 08:41:00.489341] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.904 [2024-10-01 08:41:00.489464] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.904 [2024-10-01 08:41:00.489471] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.904 [2024-10-01 08:41:00.489474] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.904 [2024-10-01 08:41:00.489478] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.904 [2024-10-01 08:41:00.489488] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.489492] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.489495] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.905 [2024-10-01 08:41:00.489502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.905 [2024-10-01 08:41:00.489512] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.905 [2024-10-01 08:41:00.489648] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.905 [2024-10-01 08:41:00.489654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.905 [2024-10-01 08:41:00.489658] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.489662] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.905 [2024-10-01 08:41:00.489671] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.489675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.489679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.905 [2024-10-01 08:41:00.489686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.905 [2024-10-01 08:41:00.489696] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.905 [2024-10-01 08:41:00.489832] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.905 [2024-10-01 08:41:00.489838] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.905 [2024-10-01 08:41:00.489842] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.489845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.905 [2024-10-01 08:41:00.489855] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.489859] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.489862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760) 00:26:08.905 [2024-10-01 08:41:00.489869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.905 [2024-10-01 08:41:00.489879] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0 00:26:08.905 [2024-10-01 08:41:00.494000] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.905 [2024-10-01 08:41:00.494008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.905 [2024-10-01 08:41:00.494012] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.494018] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760 00:26:08.905 [2024-10-01 08:41:00.494029] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.494033] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.905 [2024-10-01 08:41:00.494036] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2413760)
00:26:08.905 [2024-10-01 08:41:00.494043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.905 [2024-10-01 08:41:00.494054] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2473900, cid 3, qid 0
00:26:08.905 [2024-10-01 08:41:00.494235] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.905 [2024-10-01 08:41:00.494242] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.905 [2024-10-01 08:41:00.494245] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.905 [2024-10-01 08:41:00.494249] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2473900) on tqpair=0x2413760
00:26:08.905 [2024-10-01 08:41:00.494257] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:26:08.905
00:26:08.905 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:26:08.905 [2024-10-01 08:41:00.539176] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:26:08.905 [2024-10-01 08:41:00.539247] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848315 ]
00:26:08.905 [2024-10-01 08:41:00.571533] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:26:08.905 [2024-10-01 08:41:00.571578] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:26:08.905 [2024-10-01 08:41:00.571583] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:26:08.905 [2024-10-01 08:41:00.571599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:26:08.905 [2024-10-01 08:41:00.571606] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:26:08.905 [2024-10-01 08:41:00.575189] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:26:08.905 [2024-10-01 08:41:00.575221] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x240c760 0
00:26:08.905 [2024-10-01 08:41:00.575295] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:26:08.905 [2024-10-01 08:41:00.575307] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:26:08.905 [2024-10-01 08:41:00.575313] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:26:08.905 [2024-10-01 08:41:00.575317] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:26:08.905 [2024-10-01 08:41:00.575338] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.905 [2024-10-01 08:41:00.575343] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
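The -r string handed to spdk_nvme_identify above is a transport ID in the same key:value form the SPDK API accepts, and the connect adminq / icreq / ICResp (pdu type 0 and 1) lines that follow are the NVMe/TCP initialization handshake for it. A compact sketch of driving the same connection programmatically against SPDK's public API (error handling trimmed; the trid string is copied from the invocation above):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;

            /* Mirrors the DPDK EAL bring-up logged above. */
            spdk_env_opts_init(&env_opts);
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }

            spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1");

            /* Runs the connect adminq / icreq sequence shown in the trace. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            printf("connected to %s\n",
                   (const char *)spdk_nvme_ctrlr_get_data(ctrlr)->subnqn);
            spdk_nvme_detach(ctrlr);
            return 0;
    }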
*DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760) 00:26:08.905 [2024-10-01 08:41:00.575360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:08.905 [2024-10-01 08:41:00.575373] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0 00:26:08.905 [2024-10-01 08:41:00.583004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.905 [2024-10-01 08:41:00.583017] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.905 [2024-10-01 08:41:00.583021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760 00:26:08.905 [2024-10-01 08:41:00.583037] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:08.905 [2024-10-01 08:41:00.583043] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:08.905 [2024-10-01 08:41:00.583048] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:08.905 [2024-10-01 08:41:00.583060] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583064] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583068] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760) 00:26:08.905 [2024-10-01 08:41:00.583076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.905 [2024-10-01 08:41:00.583089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0 00:26:08.905 [2024-10-01 08:41:00.583148] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.905 [2024-10-01 08:41:00.583155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.905 [2024-10-01 08:41:00.583159] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583163] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760 00:26:08.905 [2024-10-01 08:41:00.583168] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:08.905 [2024-10-01 08:41:00.583175] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:08.905 [2024-10-01 08:41:00.583182] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583189] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760) 00:26:08.905 [2024-10-01 08:41:00.583196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.905 [2024-10-01 08:41:00.583207] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0 00:26:08.905 [2024-10-01 08:41:00.583259] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.905 [2024-10-01 08:41:00.583265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:26:08.905 [2024-10-01 08:41:00.583269] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583273] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760 00:26:08.905 [2024-10-01 08:41:00.583278] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:08.905 [2024-10-01 08:41:00.583286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:08.905 [2024-10-01 08:41:00.583292] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583300] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760) 00:26:08.905 [2024-10-01 08:41:00.583306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.905 [2024-10-01 08:41:00.583317] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0 00:26:08.905 [2024-10-01 08:41:00.583366] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.905 [2024-10-01 08:41:00.583373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.905 [2024-10-01 08:41:00.583379] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760 00:26:08.905 [2024-10-01 08:41:00.583388] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:08.905 [2024-10-01 08:41:00.583397] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583404] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760) 00:26:08.905 [2024-10-01 08:41:00.583411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.905 [2024-10-01 08:41:00.583422] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0 00:26:08.905 [2024-10-01 08:41:00.583470] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.905 [2024-10-01 08:41:00.583476] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.905 [2024-10-01 08:41:00.583480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.905 [2024-10-01 08:41:00.583484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760 00:26:08.905 [2024-10-01 08:41:00.583488] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:08.905 [2024-10-01 08:41:00.583493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:08.906 [2024-10-01 08:41:00.583500] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 
00:26:08.905 [2024-10-01 08:41:00.583500] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:26:08.906 [2024-10-01 08:41:00.583606] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:26:08.906 [2024-10-01 08:41:00.583610] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:26:08.906 [2024-10-01 08:41:00.583617] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.583621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.583625] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760)
00:26:08.906 [2024-10-01 08:41:00.583632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.906 [2024-10-01 08:41:00.583642] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0
00:26:08.906 [2024-10-01 08:41:00.583699] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.906 [2024-10-01 08:41:00.583705] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.906 [2024-10-01 08:41:00.583709] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.583714] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760
00:26:08.906 [2024-10-01 08:41:00.583719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:26:08.906 [2024-10-01 08:41:00.583730] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.583734] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.583738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760)
00:26:08.906 [2024-10-01 08:41:00.583744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.906 [2024-10-01 08:41:00.583755] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0
00:26:08.906 [2024-10-01 08:41:00.583807] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.906 [2024-10-01 08:41:00.583817] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.906 [2024-10-01 08:41:00.583821] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.583825] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760
00:26:08.906 [2024-10-01 08:41:00.583829] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:26:08.906 [2024-10-01 08:41:00.583834] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:26:08.906 [2024-10-01 08:41:00.583841] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:26:08.906 [2024-10-01 08:41:00.583848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
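The enable handshake logged above is the generic NVMe controller bring-up: write CC.EN = 1, then poll CSTS until RDY = 1, with each register access travelling as a Fabrics Property Set/Get capsule on this transport (the FABRIC PROPERTY SET/GET notices). A schematic of the sequence; prop_get()/prop_set() are hypothetical stand-ins for the property transport, not SPDK functions:

    #include <stdint.h>

    #define NVME_REG_CC    0x14   /* controller configuration, per the NVMe spec */
    #define NVME_REG_CSTS  0x1c   /* controller status */
    #define NVME_CC_EN     (1u << 0)
    #define NVME_CSTS_RDY  (1u << 0)

    uint32_t prop_get(uint32_t ofst);               /* hypothetical: Fabrics Property Get */
    void     prop_set(uint32_t ofst, uint32_t val); /* hypothetical: Fabrics Property Set */

    /* Sketch of the bring-up the trace shows: set EN, then poll RDY,
     * bounded in SPDK by the 15000 ms timeouts printed above. */
    static int enable_controller(void)
    {
            prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_EN);
            while ((prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY) == 0) {
                    /* poll until "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" */
            }
            return 0;
    }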
00:26:08.906 [2024-10-01 08:41:00.583857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.583860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760)
00:26:08.906 [2024-10-01 08:41:00.583867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.906 [2024-10-01 08:41:00.583878] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0
00:26:08.906 [2024-10-01 08:41:00.583959] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:08.906 [2024-10-01 08:41:00.583966] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:08.906 [2024-10-01 08:41:00.583969] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.583973] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x240c760): datao=0, datal=4096, cccid=0
00:26:08.906 [2024-10-01 08:41:00.583978] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x246c480) on tqpair(0x240c760): expected_datao=0, payload_size=4096
00:26:08.906 [2024-10-01 08:41:00.583982] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.584010] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.584021] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.584069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.906 [2024-10-01 08:41:00.584076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.906 [2024-10-01 08:41:00.584079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.584083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760
00:26:08.906 [2024-10-01 08:41:00.584091] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:26:08.906 [2024-10-01 08:41:00.584095] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:26:08.906 [2024-10-01 08:41:00.584100] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:26:08.906 [2024-10-01 08:41:00.584104] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:26:08.906 [2024-10-01 08:41:00.584108] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:26:08.906 [2024-10-01 08:41:00.584113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:26:08.906 [2024-10-01 08:41:00.584121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:26:08.906 [2024-10-01 08:41:00.584128] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.584133] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.906 [2024-10-01 08:41:00.584137] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760)
00:26:08.906 [2024-10-01 08:41:00.584146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0
cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:08.906 [2024-10-01 08:41:00.584158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0 00:26:08.906 [2024-10-01 08:41:00.584208] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.906 [2024-10-01 08:41:00.584214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.906 [2024-10-01 08:41:00.584218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584222] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760 00:26:08.906 [2024-10-01 08:41:00.584228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584232] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584236] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x240c760) 00:26:08.906 [2024-10-01 08:41:00.584242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.906 [2024-10-01 08:41:00.584249] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584253] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584256] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x240c760) 00:26:08.906 [2024-10-01 08:41:00.584262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.906 [2024-10-01 08:41:00.584268] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584272] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x240c760) 00:26:08.906 [2024-10-01 08:41:00.584281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.906 [2024-10-01 08:41:00.584287] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584291] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.906 [2024-10-01 08:41:00.584301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.906 [2024-10-01 08:41:00.584306] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:08.906 [2024-10-01 08:41:00.584316] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:08.906 [2024-10-01 08:41:00.584322] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584326] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x240c760) 00:26:08.906 [2024-10-01 08:41:00.584333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
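The SET FEATURES ASYNC EVENT CONFIGURATION command (feature identifier 0x0b, visible in cdw10) followed by four ASYNC EVENT REQUEST (0c) submissions on cids 0 through 3 is the driver arming its AER slots up to the Async Event Request Limit of 4 reported earlier. A minimal sketch of how an application consumes those events through SPDK's public callback; the cdw0 decoding follows the async event completion layout in the NVMe spec:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Called when one of the armed ASYNC EVENT REQUESTs completes. */
    static void on_aer(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* Outstanding AERs are aborted at shutdown, as the
                     * ABORTED - SQ DELETION notices earlier show. */
                    return;
            }
            /* async event dword: bits 2:0 type, 15:8 info, 23:16 log page id */
            printf("AER: type=%u info=0x%02x log_page=0x%02x\n",
                   cpl->cdw0 & 0x7,
                   (cpl->cdw0 >> 8) & 0xff,
                   (cpl->cdw0 >> 16) & 0xff);
    }

    /* After connecting:
     *   spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_aer, NULL);
     */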
00:26:08.906 [2024-10-01 08:41:00.584345] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c480, cid 0, qid 0 00:26:08.906 [2024-10-01 08:41:00.584351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c600, cid 1, qid 0 00:26:08.906 [2024-10-01 08:41:00.584359] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c780, cid 2, qid 0 00:26:08.906 [2024-10-01 08:41:00.584366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.906 [2024-10-01 08:41:00.584371] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246ca80, cid 4, qid 0 00:26:08.906 [2024-10-01 08:41:00.584447] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.906 [2024-10-01 08:41:00.584454] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.906 [2024-10-01 08:41:00.584459] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.906 [2024-10-01 08:41:00.584463] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246ca80) on tqpair=0x240c760 00:26:08.907 [2024-10-01 08:41:00.584468] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:08.907 [2024-10-01 08:41:00.584473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.584481] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.584489] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.584495] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.584499] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.584502] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x240c760) 00:26:08.907 [2024-10-01 08:41:00.584509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:08.907 [2024-10-01 08:41:00.584519] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246ca80, cid 4, qid 0 00:26:08.907 [2024-10-01 08:41:00.584570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.907 [2024-10-01 08:41:00.584576] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.907 [2024-10-01 08:41:00.584580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.584584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246ca80) on tqpair=0x240c760 00:26:08.907 [2024-10-01 08:41:00.584648] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.584657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.584664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.584668] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x240c760) 00:26:08.907 [2024-10-01 08:41:00.584674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.907 [2024-10-01 08:41:00.584685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246ca80, cid 4, qid 0 00:26:08.907 [2024-10-01 08:41:00.584744] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.907 [2024-10-01 08:41:00.584753] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.907 [2024-10-01 08:41:00.584759] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.584763] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x240c760): datao=0, datal=4096, cccid=4 00:26:08.907 [2024-10-01 08:41:00.584768] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x246ca80) on tqpair(0x240c760): expected_datao=0, payload_size=4096 00:26:08.907 [2024-10-01 08:41:00.584772] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.584786] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.584790] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625034] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.907 [2024-10-01 08:41:00.625048] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.907 [2024-10-01 08:41:00.625052] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246ca80) on tqpair=0x240c760 00:26:08.907 [2024-10-01 08:41:00.625071] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:08.907 [2024-10-01 08:41:00.625084] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.625094] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.625101] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625104] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x240c760) 00:26:08.907 [2024-10-01 08:41:00.625112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.907 [2024-10-01 08:41:00.625124] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246ca80, cid 4, qid 0 00:26:08.907 [2024-10-01 08:41:00.625184] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.907 [2024-10-01 08:41:00.625191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.907 [2024-10-01 08:41:00.625195] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625198] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x240c760): datao=0, datal=4096, cccid=4 00:26:08.907 [2024-10-01 08:41:00.625203] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x246ca80) on tqpair(0x240c760): expected_datao=0, payload_size=4096 00:26:08.907 [2024-10-01 
08:41:00.625207] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625222] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625227] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625332] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.907 [2024-10-01 08:41:00.625339] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.907 [2024-10-01 08:41:00.625342] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625346] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246ca80) on tqpair=0x240c760 00:26:08.907 [2024-10-01 08:41:00.625357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.625367] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.625374] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625378] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x240c760) 00:26:08.907 [2024-10-01 08:41:00.625384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.907 [2024-10-01 08:41:00.625395] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246ca80, cid 4, qid 0 00:26:08.907 [2024-10-01 08:41:00.625492] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.907 [2024-10-01 08:41:00.625502] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.907 [2024-10-01 08:41:00.625507] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625511] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x240c760): datao=0, datal=4096, cccid=4 00:26:08.907 [2024-10-01 08:41:00.625515] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x246ca80) on tqpair(0x240c760): expected_datao=0, payload_size=4096 00:26:08.907 [2024-10-01 08:41:00.625520] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625531] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.625535] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666033] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.907 [2024-10-01 08:41:00.666050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.907 [2024-10-01 08:41:00.666054] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666058] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246ca80) on tqpair=0x240c760 00:26:08.907 [2024-10-01 08:41:00.666066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.666074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.666082] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.666088] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.666093] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.666098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.666103] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:08.907 [2024-10-01 08:41:00.666108] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:08.907 [2024-10-01 08:41:00.666113] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:08.907 [2024-10-01 08:41:00.666126] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666130] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x240c760) 00:26:08.907 [2024-10-01 08:41:00.666137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.907 [2024-10-01 08:41:00.666144] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666148] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x240c760) 00:26:08.907 [2024-10-01 08:41:00.666157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.907 [2024-10-01 08:41:00.666170] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246ca80, cid 4, qid 0 00:26:08.907 [2024-10-01 08:41:00.666176] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246cc00, cid 5, qid 0 00:26:08.907 [2024-10-01 08:41:00.666233] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.907 [2024-10-01 08:41:00.666240] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.907 [2024-10-01 08:41:00.666243] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666247] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246ca80) on tqpair=0x240c760 00:26:08.907 [2024-10-01 08:41:00.666254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.907 [2024-10-01 08:41:00.666260] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.907 [2024-10-01 08:41:00.666263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666267] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246cc00) on tqpair=0x240c760 00:26:08.907 [2024-10-01 08:41:00.666276] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666280] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x240c760) 00:26:08.907 [2024-10-01 08:41:00.666286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.907 [2024-10-01 08:41:00.666298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246cc00, cid 5, qid 0 00:26:08.907 [2024-10-01 08:41:00.666372] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.907 [2024-10-01 08:41:00.666379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.907 [2024-10-01 08:41:00.666383] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.907 [2024-10-01 08:41:00.666387] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246cc00) on tqpair=0x240c760 00:26:08.908 [2024-10-01 08:41:00.666396] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666400] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x240c760) 00:26:08.908 [2024-10-01 08:41:00.666406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.908 [2024-10-01 08:41:00.666416] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246cc00, cid 5, qid 0 00:26:08.908 [2024-10-01 08:41:00.666464] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.908 [2024-10-01 08:41:00.666471] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.908 [2024-10-01 08:41:00.666474] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666478] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246cc00) on tqpair=0x240c760 00:26:08.908 [2024-10-01 08:41:00.666487] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666491] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x240c760) 00:26:08.908 [2024-10-01 08:41:00.666497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.908 [2024-10-01 08:41:00.666507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246cc00, cid 5, qid 0 00:26:08.908 [2024-10-01 08:41:00.666552] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.908 [2024-10-01 08:41:00.666558] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.908 [2024-10-01 08:41:00.666562] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666566] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246cc00) on tqpair=0x240c760 00:26:08.908 [2024-10-01 08:41:00.666580] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666584] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x240c760) 00:26:08.908 [2024-10-01 08:41:00.666591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.908 [2024-10-01 08:41:00.666598] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666601] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x240c760) 00:26:08.908 [2024-10-01 08:41:00.666608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.908 [2024-10-01 08:41:00.666615] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666619] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x240c760) 00:26:08.908 [2024-10-01 08:41:00.666625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.908 [2024-10-01 08:41:00.666634] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666638] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x240c760) 00:26:08.908 [2024-10-01 08:41:00.666644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.908 [2024-10-01 08:41:00.666657] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246cc00, cid 5, qid 0 00:26:08.908 [2024-10-01 08:41:00.666663] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246ca80, cid 4, qid 0 00:26:08.908 [2024-10-01 08:41:00.666671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246cd80, cid 6, qid 0 00:26:08.908 [2024-10-01 08:41:00.666677] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246cf00, cid 7, qid 0 00:26:08.908 [2024-10-01 08:41:00.666789] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.908 [2024-10-01 08:41:00.666798] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.908 [2024-10-01 08:41:00.666804] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666808] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x240c760): datao=0, datal=8192, cccid=5 00:26:08.908 [2024-10-01 08:41:00.666812] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x246cc00) on tqpair(0x240c760): expected_datao=0, payload_size=8192 00:26:08.908 [2024-10-01 08:41:00.666816] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666878] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666884] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666893] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.908 [2024-10-01 08:41:00.666899] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.908 [2024-10-01 08:41:00.666902] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666906] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x240c760): datao=0, datal=512, cccid=4 00:26:08.908 [2024-10-01 08:41:00.666910] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x246ca80) on tqpair(0x240c760): expected_datao=0, payload_size=512 00:26:08.908 [2024-10-01 08:41:00.666915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666921] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666925] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666930] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.908 [2024-10-01 08:41:00.666936] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.908 [2024-10-01 08:41:00.666940] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666943] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x240c760): datao=0, datal=512, cccid=6 00:26:08.908 [2024-10-01 08:41:00.666948] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x246cd80) on tqpair(0x240c760): expected_datao=0, payload_size=512 00:26:08.908 [2024-10-01 08:41:00.666952] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666958] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666962] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.908 [2024-10-01 08:41:00.666973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.908 [2024-10-01 08:41:00.666977] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.666980] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x240c760): datao=0, datal=4096, cccid=7 00:26:08.908 [2024-10-01 08:41:00.666985] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x246cf00) on tqpair(0x240c760): expected_datao=0, payload_size=4096 00:26:08.908 [2024-10-01 08:41:00.666989] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.671009] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.671014] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.710005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.908 [2024-10-01 08:41:00.710018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.908 [2024-10-01 08:41:00.710025] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.710030] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246cc00) on tqpair=0x240c760 00:26:08.908 [2024-10-01 08:41:00.710044] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.908 [2024-10-01 08:41:00.710049] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.908 [2024-10-01 08:41:00.710053] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.710057] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246ca80) on tqpair=0x240c760 00:26:08.908 [2024-10-01 08:41:00.710067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.908 [2024-10-01 08:41:00.710073] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.908 [2024-10-01 08:41:00.710076] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.908 [2024-10-01 08:41:00.710080] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246cd80) on tqpair=0x240c760 00:26:08.908 [2024-10-01 08:41:00.710087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.908 [2024-10-01 08:41:00.710093] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.908 [2024-10-01 08:41:00.710097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.908 [2024-10-01 08:41:00.710100] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246cf00) on tqpair=0x240c760
00:26:08.908 =====================================================
00:26:08.908 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:08.908 =====================================================
00:26:08.908 Controller Capabilities/Features
00:26:08.908 ================================
00:26:08.908 Vendor ID: 8086
00:26:08.908 Subsystem Vendor ID: 8086
00:26:08.908 Serial Number: SPDK00000000000001
00:26:08.908 Model Number: SPDK bdev Controller
00:26:08.908 Firmware Version: 25.01
00:26:08.908 Recommended Arb Burst: 6
00:26:08.908 IEEE OUI Identifier: e4 d2 5c
00:26:08.908 Multi-path I/O
00:26:08.908 May have multiple subsystem ports: Yes
00:26:08.908 May have multiple controllers: Yes
00:26:08.908 Associated with SR-IOV VF: No
00:26:08.908 Max Data Transfer Size: 131072
00:26:08.908 Max Number of Namespaces: 32
00:26:08.908 Max Number of I/O Queues: 127
00:26:08.908 NVMe Specification Version (VS): 1.3
00:26:08.908 NVMe Specification Version (Identify): 1.3
00:26:08.908 Maximum Queue Entries: 128
00:26:08.908 Contiguous Queues Required: Yes
00:26:08.908 Arbitration Mechanisms Supported
00:26:08.908 Weighted Round Robin: Not Supported
00:26:08.908 Vendor Specific: Not Supported
00:26:08.908 Reset Timeout: 15000 ms
00:26:08.908 Doorbell Stride: 4 bytes
00:26:08.908 NVM Subsystem Reset: Not Supported
00:26:08.908 Command Sets Supported
00:26:08.908 NVM Command Set: Supported
00:26:08.908 Boot Partition: Not Supported
00:26:08.908 Memory Page Size Minimum: 4096 bytes
00:26:08.908 Memory Page Size Maximum: 4096 bytes
00:26:08.908 Persistent Memory Region: Not Supported
00:26:08.908 Optional Asynchronous Events Supported
00:26:08.908 Namespace Attribute Notices: Supported
00:26:08.908 Firmware Activation Notices: Not Supported
00:26:08.908 ANA Change Notices: Not Supported
00:26:08.908 PLE Aggregate Log Change Notices: Not Supported
00:26:08.908 LBA Status Info Alert Notices: Not Supported
00:26:08.908 EGE Aggregate Log Change Notices: Not Supported
00:26:08.908 Normal NVM Subsystem Shutdown event: Not Supported
00:26:08.908 Zone Descriptor Change Notices: Not Supported
00:26:08.908 Discovery Log Change Notices: Not Supported
00:26:08.908 Controller Attributes
00:26:08.908 128-bit Host Identifier: Supported
00:26:08.908 Non-Operational Permissive Mode: Not Supported
00:26:08.908 NVM Sets: Not Supported
00:26:08.908 Read Recovery Levels: Not Supported
00:26:08.909 Endurance Groups: Not Supported
00:26:08.909 Predictable Latency Mode: Not Supported
00:26:08.909 Traffic Based Keep Alive: Not Supported
00:26:08.909 Namespace Granularity: Not Supported
00:26:08.909 SQ Associations: Not Supported
00:26:08.909 UUID List: Not Supported
00:26:08.909 Multi-Domain Subsystem: Not Supported
00:26:08.909 Fixed Capacity Management: Not Supported
00:26:08.909 Variable Capacity Management: Not Supported
00:26:08.909 Delete Endurance Group: Not Supported
00:26:08.909 Delete NVM Set: Not Supported
00:26:08.909 Extended LBA Formats Supported: Not Supported
00:26:08.909 Flexible Data Placement Supported: Not Supported
00:26:08.909
00:26:08.909 Controller Memory Buffer Support
00:26:08.909 ================================
00:26:08.909 Supported: No
00:26:08.909
00:26:08.909 Persistent Memory Region Support
00:26:08.909 ================================
00:26:08.909 Supported: No
00:26:08.909
00:26:08.909 Admin Command Set Attributes
00:26:08.909 ============================
00:26:08.909 Security Send/Receive: Not Supported
00:26:08.909 Format NVM: Not Supported
00:26:08.909 Firmware Activate/Download: Not Supported
00:26:08.909 Namespace Management: Not Supported
00:26:08.909 Device Self-Test: Not Supported
00:26:08.909 Directives: Not Supported
00:26:08.909 NVMe-MI: Not Supported
00:26:08.909 Virtualization Management: Not Supported
00:26:08.909 Doorbell Buffer Config: Not Supported
00:26:08.909 Get LBA Status Capability: Not Supported
00:26:08.909 Command & Feature Lockdown Capability: Not Supported
00:26:08.909 Abort Command Limit: 4
00:26:08.909 Async Event Request Limit: 4
00:26:08.909 Number of Firmware Slots: N/A
00:26:08.909 Firmware Slot 1 Read-Only: N/A
00:26:08.909 Firmware Activation Without Reset: N/A
00:26:08.909 Multiple Update Detection Support: N/A
00:26:08.909 Firmware Update Granularity: No Information Provided
00:26:08.909 Per-Namespace SMART Log: No
00:26:08.909 Asymmetric Namespace Access Log Page: Not Supported
00:26:08.909 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:08.909 Command Effects Log Page: Supported
00:26:08.909 Get Log Page Extended Data: Supported
00:26:08.909 Telemetry Log Pages: Not Supported
00:26:08.909 Persistent Event Log Pages: Not Supported
00:26:08.909 Supported Log Pages Log Page: May Support
00:26:08.909 Commands Supported & Effects Log Page: Not Supported
00:26:08.909 Feature Identifiers & Effects Log Page: May Support
00:26:08.909 NVMe-MI Commands & Effects Log Page: May Support
00:26:08.909 Data Area 4 for Telemetry Log: Not Supported
00:26:08.909 Error Log Page Entries Supported: 128
00:26:08.909 Keep Alive: Supported
00:26:08.909 Keep Alive Granularity: 10000 ms
00:26:08.909
00:26:08.909 NVM Command Set Attributes
00:26:08.909 ==========================
00:26:08.909 Submission Queue Entry Size
00:26:08.909 Max: 64
00:26:08.909 Min: 64
00:26:08.909 Completion Queue Entry Size
00:26:08.909 Max: 16
00:26:08.909 Min: 16
00:26:08.909 Number of Namespaces: 32
00:26:08.909 Compare Command: Supported
00:26:08.909 Write Uncorrectable Command: Not Supported
00:26:08.909 Dataset Management Command: Supported
00:26:08.909 Write Zeroes Command: Supported
00:26:08.909 Set Features Save Field: Not Supported
00:26:08.909 Reservations: Supported
00:26:08.909 Timestamp: Not Supported
00:26:08.909 Copy: Supported
00:26:08.909 Volatile Write Cache: Present
00:26:08.909 Atomic Write Unit (Normal): 1
00:26:08.909 Atomic Write Unit (PFail): 1
00:26:08.909 Atomic Compare & Write Unit: 1
00:26:08.909 Fused Compare & Write: Supported
00:26:08.909 Scatter-Gather List
00:26:08.909 SGL Command Set: Supported
00:26:08.909 SGL Keyed: Supported
00:26:08.909 SGL Bit Bucket Descriptor: Not Supported
00:26:08.909 SGL Metadata Pointer: Not Supported
00:26:08.909 Oversized SGL: Not Supported
00:26:08.909 SGL Metadata Address: Not Supported
00:26:08.909 SGL Offset: Supported
00:26:08.909 Transport SGL Data Block: Not Supported
00:26:08.909 Replay Protected Memory Block: Not Supported
00:26:08.909
00:26:08.909 Firmware Slot Information
00:26:08.909 =========================
00:26:08.909 Active slot: 1
00:26:08.909 Slot 1 Firmware Revision: 25.01
00:26:08.909
00:26:08.909
00:26:08.909 Commands Supported and Effects
00:26:08.909 ==============================
00:26:08.909 Admin Commands
00:26:08.909 --------------
00:26:08.909 Get Log Page (02h): Supported
00:26:08.909 Identify (06h): Supported
00:26:08.909 Abort (08h): Supported
00:26:08.909 Set Features (09h): Supported
00:26:08.909 Get Features (0Ah): Supported
00:26:08.909 Asynchronous Event Request (0Ch): Supported
00:26:08.909 Keep Alive (18h): Supported
00:26:08.909 I/O Commands
00:26:08.909 ------------
00:26:08.909 Flush (00h): Supported LBA-Change
00:26:08.909 Write (01h): Supported LBA-Change
00:26:08.909 Read (02h): Supported
00:26:08.909 Compare (05h): Supported
00:26:08.909 Write Zeroes (08h): Supported LBA-Change
00:26:08.909 Dataset Management (09h): Supported LBA-Change
00:26:08.909 Copy (19h): Supported LBA-Change
00:26:08.909
00:26:08.909 Error Log
00:26:08.909 =========
00:26:08.909
00:26:08.909 Arbitration
00:26:08.909 ===========
00:26:08.909 Arbitration Burst: 1
00:26:08.909
00:26:08.909 Power Management
00:26:08.909 ================
00:26:08.909 Number of Power States: 1
00:26:08.909 Current Power State: Power State #0
00:26:08.909 Power State #0:
00:26:08.909 Max Power: 0.00 W
00:26:08.909 Non-Operational State: Operational
00:26:08.909 Entry Latency: Not Reported
00:26:08.909 Exit Latency: Not Reported
00:26:08.909 Relative Read Throughput: 0
00:26:08.909 Relative Read Latency: 0
00:26:08.909 Relative Write Throughput: 0
00:26:08.909 Relative Write Latency: 0
00:26:08.909 Idle Power: Not Reported
00:26:08.909 Active Power: Not Reported
00:26:08.909 Non-Operational Permissive Mode: Not Supported
00:26:08.909
00:26:08.909 Health Information
00:26:08.909 ==================
00:26:08.909 Critical Warnings:
00:26:08.909 Available Spare Space: OK
00:26:08.909 Temperature: OK
00:26:08.909 Device Reliability: OK
00:26:08.909 Read Only: No
00:26:08.909 Volatile Memory Backup: OK
00:26:08.909 Current Temperature: 0 Kelvin (-273 Celsius)
00:26:08.909 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:26:08.909 Available Spare: 0%
00:26:08.909 Available Spare Threshold: 0%
00:26:08.909 Life Percentage Used:[2024-10-01 08:41:00.710198] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.909 [2024-10-01 08:41:00.710203] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x240c760)
00:26:08.909 [2024-10-01 08:41:00.710211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.909 [2024-10-01 08:41:00.710224] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246cf00, cid 7, qid 0
00:26:08.909 [2024-10-01 08:41:00.710275] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.909 [2024-10-01 08:41:00.710282] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.909 [2024-10-01 08:41:00.710285] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.909 [2024-10-01 08:41:00.710289] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246cf00) on tqpair=0x240c760
00:26:08.909 [2024-10-01 08:41:00.710319] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:26:08.909 [2024-10-01 08:41:00.710328] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c480) on tqpair=0x240c760
00:26:08.909 [2024-10-01 08:41:00.710334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.909 [2024-10-01 08:41:00.710340] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c600) on
tqpair=0x240c760 00:26:08.909 [2024-10-01 08:41:00.710344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.909 [2024-10-01 08:41:00.710350] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c780) on tqpair=0x240c760 00:26:08.909 [2024-10-01 08:41:00.710354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.909 [2024-10-01 08:41:00.710359] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.909 [2024-10-01 08:41:00.710364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.909 [2024-10-01 08:41:00.710372] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.909 [2024-10-01 08:41:00.710376] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.909 [2024-10-01 08:41:00.710379] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.909 [2024-10-01 08:41:00.710386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.909 [2024-10-01 08:41:00.710400] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.909 [2024-10-01 08:41:00.710454] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.909 [2024-10-01 08:41:00.710461] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.909 [2024-10-01 08:41:00.710465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710469] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.710475] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710479] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710483] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.710490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.710502] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.710563] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.710570] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.710574] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710578] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.710582] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:08.910 [2024-10-01 08:41:00.710587] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:08.910 [2024-10-01 08:41:00.710596] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:26:08.910 [2024-10-01 08:41:00.710604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.710610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.710621] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.710672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.710679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.710682] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710686] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.710696] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710700] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710704] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.710711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.710721] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.710766] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.710773] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.710776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.710789] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710793] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710797] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.710806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.710816] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.710865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.710872] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.710876] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710880] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.710889] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710893] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710896] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.710903] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.710913] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.710959] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.710966] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.710969] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710973] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.710983] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710986] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.710990] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.711002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.711013] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.711069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.711076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.711079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.711093] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711100] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.711107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.711117] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.711165] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.711172] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.711176] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711179] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.711189] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711193] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.711203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.711215] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.711267] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.711274] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.711278] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711281] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.711291] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711295] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711298] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.711305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.711315] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.711364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.711371] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.711374] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711378] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.711388] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711392] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711395] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.711402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.711412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.711464] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.711470] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.711474] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711478] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.711487] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711495] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.711501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.711511] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.711556] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.711563] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.711566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711570] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.711580] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711584] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711587] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.711594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.711606] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.711682] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.711689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.711692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.910 [2024-10-01 08:41:00.711706] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711710] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.910 [2024-10-01 08:41:00.711714] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.910 [2024-10-01 08:41:00.711720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.910 [2024-10-01 08:41:00.711731] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.910 [2024-10-01 08:41:00.711782] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.910 [2024-10-01 08:41:00.711789] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.910 [2024-10-01 08:41:00.711792] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.911 [2024-10-01 08:41:00.711796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760 00:26:08.911 [2024-10-01 08:41:00.711806] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.911 [2024-10-01 08:41:00.711810] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.911 [2024-10-01 08:41:00.711813] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760) 00:26:08.911 [2024-10-01 08:41:00.711820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.911 [2024-10-01 08:41:00.711830] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0 00:26:08.911 [2024-10-01 08:41:00.711877] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.911 [2024-10-01 08:41:00.711883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.911 [2024-10-01 08:41:00.711887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.911 [2024-10-01 
08:41:00.711891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760
00:26:08.911 [2024-10-01 08:41:00.711901] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.911 [2024-10-01 08:41:00.711905] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.911 [2024-10-01 08:41:00.711908] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x240c760)
00:26:08.911 [2024-10-01 08:41:00.711915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.911 [2024-10-01 08:41:00.711925] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x246c900, cid 3, qid 0
00:26:08.911 [2024-10-01 08:41:00.711973] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.911 [2024-10-01 08:41:00.711980] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.911 [2024-10-01 08:41:00.711983] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.911 [2024-10-01 08:41:00.711987] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760
[... the identical FABRIC PROPERTY GET capsule/response debug cycle for tcp_req 0x246c900 repeats, timestamps aside, through 08:41:00.718072 while the host polls the controller's shutdown status ...]
00:26:08.912 [2024-10-01 08:41:00.718145] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.912 [2024-10-01 08:41:00.718152] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.912 [2024-10-01 08:41:00.718156] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.912 [2024-10-01 08:41:00.718160] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x246c900) on tqpair=0x240c760
00:26:08.912 [2024-10-01 08:41:00.718167] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:26:09.173 0%
00:26:09.173 Data Units Read: 0
00:26:09.173 Data Units Written: 0
00:26:09.173 Host Read Commands: 0
00:26:09.173 Host Write Commands: 0
00:26:09.173 Controller Busy Time: 0 minutes
00:26:09.173 Power Cycles: 0
00:26:09.173 Power On Hours: 0 hours
00:26:09.173 Unsafe Shutdowns: 0
00:26:09.173 Unrecoverable Media Errors: 0
00:26:09.173 Lifetime Error Log Entries: 0
00:26:09.173 Warning Temperature Time: 0 minutes
00:26:09.173 Critical Temperature Time: 0 minutes
00:26:09.173
00:26:09.173 Number of Queues
00:26:09.173 ================
00:26:09.173 Number of I/O Submission Queues: 127
00:26:09.173 Number of I/O Completion Queues: 127
00:26:09.173
00:26:09.173 Active Namespaces
00:26:09.173 =================
00:26:09.173 Namespace ID:1
00:26:09.173 Error Recovery Timeout: Unlimited
00:26:09.173 Command Set Identifier: NVM (00h)
00:26:09.173 Deallocate: Supported
00:26:09.173 Deallocated/Unwritten Error: Not Supported
00:26:09.173 Deallocated Read Value: Unknown
00:26:09.173 Deallocate in Write Zeroes: Not Supported
00:26:09.173 Deallocated Guard Field: 0xFFFF
00:26:09.173 Flush: Supported
00:26:09.173 Reservation: Supported
00:26:09.173 Namespace Sharing Capabilities: Multiple Controllers
00:26:09.173 Size (in LBAs): 131072 (0GiB)
00:26:09.173 Capacity (in LBAs): 131072 (0GiB)
00:26:09.173 Utilization (in LBAs): 131072 (0GiB)
00:26:09.173 NGUID: ABCDEF0123456789ABCDEF0123456789
00:26:09.173 EUI64: ABCDEF0123456789
00:26:09.173 UUID: 9c151ba5-5bc9-434c-abd9-9554e3694bf1
00:26:09.173 Thin Provisioning: Not Supported
00:26:09.173 Per-NS Atomic Units: Yes
00:26:09.173 Atomic Boundary Size (Normal): 0
00:26:09.173 Atomic Boundary Size (PFail): 0
00:26:09.173 Atomic Boundary Offset: 0
00:26:09.173 Maximum Single Source Range Length: 65535
00:26:09.173 Maximum Copy Length: 65535
00:26:09.173 Maximum Source Range Count: 1
00:26:09.173 NGUID/EUI64 Never Reused: No
00:26:09.173 Namespace Write Protected: No
00:26:09.173 Number of LBA Formats: 1
00:26:09.173 Current LBA Format: LBA Format #00
00:26:09.173 LBA Format #00: Data Size: 512 Metadata Size: 0
00:26:09.173
00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- #
xtrace_disable 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.173 rmmod nvme_tcp 00:26:09.173 rmmod nvme_fabrics 00:26:09.173 rmmod nvme_keyring 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 3847965 ']' 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 3847965 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3847965 ']' 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3847965 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3847965 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3847965' 00:26:09.173 killing process with pid 3847965 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3847965 00:26:09.173 08:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3847965 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.434 08:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.352 08:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.352 00:26:11.352 real 0m11.262s 00:26:11.352 user 0m8.196s 00:26:11.352 sys 0m5.905s 00:26:11.352 08:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.352 08:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:11.352 ************************************ 00:26:11.352 END TEST nvmf_identify 00:26:11.352 ************************************ 00:26:11.352 08:41:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:11.352 08:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:11.352 08:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.352 08:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.614 ************************************ 00:26:11.614 START TEST nvmf_perf 00:26:11.614 ************************************ 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:11.614 * Looking for test storage... 
00:26:11.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:11.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.614 --rc genhtml_branch_coverage=1 00:26:11.614 --rc genhtml_function_coverage=1 00:26:11.614 --rc genhtml_legend=1 00:26:11.614 --rc geninfo_all_blocks=1 00:26:11.614 --rc geninfo_unexecuted_blocks=1 00:26:11.614 00:26:11.614 ' 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:11.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.614 --rc genhtml_branch_coverage=1 00:26:11.614 --rc genhtml_function_coverage=1 00:26:11.614 --rc genhtml_legend=1 00:26:11.614 --rc geninfo_all_blocks=1 00:26:11.614 --rc geninfo_unexecuted_blocks=1 00:26:11.614 00:26:11.614 ' 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:11.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.614 --rc genhtml_branch_coverage=1 00:26:11.614 --rc genhtml_function_coverage=1 00:26:11.614 --rc genhtml_legend=1 00:26:11.614 --rc geninfo_all_blocks=1 00:26:11.614 --rc geninfo_unexecuted_blocks=1 00:26:11.614 00:26:11.614 ' 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:11.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.614 --rc genhtml_branch_coverage=1 00:26:11.614 --rc genhtml_function_coverage=1 00:26:11.614 --rc genhtml_legend=1 00:26:11.614 --rc geninfo_all_blocks=1 00:26:11.614 --rc geninfo_unexecuted_blocks=1 00:26:11.614 00:26:11.614 ' 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.614 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.615 08:41:03 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.615 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.876 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:11.876 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:11.876 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.876 08:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.469 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 
00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:18.470 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:18.470 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:18.470 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.470 08:41:10 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:18.470 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.470 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.731 08:41:10 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:18.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:18.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms
00:26:18.731
00:26:18.731 --- 10.0.0.2 ping statistics ---
00:26:18.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:18.731 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:18.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:18.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms
00:26:18.731
00:26:18.731 --- 10.0.0.1 ping statistics ---
00:26:18.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:18.731 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:26:18.731 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=3852539
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 3852539
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3852539 ']'
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
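
[Editor's note: the nvmftestinit trace above reduces to a handful of ip/iptables commands plus the target launch. The sketch below reassembles them from this run's xtrace output; the interface names (cvl_0_0/cvl_0_1), the 10.0.0.x addresses, and the nvmf_tgt flags are the values logged here, and the real nvmf/common.sh additionally handles cleanup, retries, and waiting on the RPC socket, all omitted in this condensed illustration.]

  #!/usr/bin/env bash
  # Condensed sketch of the NET_TYPE=phy setup traced above (assumptions noted in the note).
  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip netns add "$NS"                              # target gets its own network namespace
  ip link set cvl_0_0 netns "$NS"                 # move one port of the e810 pair into it
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # allow NVMe/TCP traffic to the listener port, tagged so cleanup can find the rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  ping -c 1 10.0.0.2                              # sanity-check both directions
  ip netns exec "$NS" ping -c 1 10.0.0.1

  modprobe nvme-tcp                               # host-side kernel transport for later tests
  # start the target inside the namespace; the harness then polls /var/tmp/spdk.sock
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &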
00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.992 08:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:18.992 [2024-10-01 08:41:10.665814] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:26:18.992 [2024-10-01 08:41:10.665890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.992 [2024-10-01 08:41:10.733909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.992 [2024-10-01 08:41:10.798374] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.992 [2024-10-01 08:41:10.798411] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.992 [2024-10-01 08:41:10.798419] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.992 [2024-10-01 08:41:10.798430] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.992 [2024-10-01 08:41:10.798436] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.992 [2024-10-01 08:41:10.799959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.992 [2024-10-01 08:41:10.800097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.992 [2024-10-01 08:41:10.800157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.992 [2024-10-01 08:41:10.800158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.933 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.933 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:26:19.933 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:19.933 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:19.933 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:19.933 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.933 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:19.933 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:20.194 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:20.194 08:41:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:20.456 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:20.456 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:20.717 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:20.717 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:20.717 08:41:12 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:20.717 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:20.717 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:20.717 [2024-10-01 08:41:12.512389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.717 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.977 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:20.977 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:21.238 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:21.238 08:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:21.498 08:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.498 [2024-10-01 08:41:13.243112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.498 08:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:21.759 08:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:21.759 08:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:21.759 08:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:21.759 08:41:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:23.144 Initializing NVMe Controllers 00:26:23.144 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:23.144 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:23.144 Initialization complete. Launching workers. 
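Before the fabric runs, perf.sh stands up the target configuration over RPC, as traced above, and takes a local-PCIe baseline with spdk_nvme_perf whose results table prints just below. Condensed, with rpc.py standing in for scripts/rpc.py:

    # RPC sequence condensed from the trace above (arguments verbatim).
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # 64 MB malloc bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe found at 0000:65:00.0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420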
00:26:23.144 ======================================================== 00:26:23.144 Latency(us) 00:26:23.144 Device Information : IOPS MiB/s Average min max 00:26:23.144 PCIE (0000:65:00.0) NSID 1 from core 0: 78336.21 306.00 407.72 13.36 4905.29 00:26:23.144 ======================================================== 00:26:23.144 Total : 78336.21 306.00 407.72 13.36 4905.29 00:26:23.144 00:26:23.144 08:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:24.528 Initializing NVMe Controllers 00:26:24.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:24.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:24.528 Initialization complete. Launching workers. 00:26:24.528 ======================================================== 00:26:24.528 Latency(us) 00:26:24.528 Device Information : IOPS MiB/s Average min max 00:26:24.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.87 0.32 12409.41 125.65 45967.17 00:26:24.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 70.88 0.28 14781.77 7958.27 51878.19 00:26:24.528 ======================================================== 00:26:24.528 Total : 152.75 0.60 13510.31 125.65 51878.19 00:26:24.528 00:26:24.528 08:41:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:25.912 Initializing NVMe Controllers 00:26:25.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:25.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:25.913 Initialization complete. Launching workers. 00:26:25.913 ======================================================== 00:26:25.913 Latency(us) 00:26:25.913 Device Information : IOPS MiB/s Average min max 00:26:25.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10388.09 40.58 3080.79 500.60 6556.88 00:26:25.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3719.67 14.53 8648.65 6098.74 16357.83 00:26:25.913 ======================================================== 00:26:25.913 Total : 14107.76 55.11 4548.82 500.60 16357.83 00:26:25.913 00:26:25.913 08:41:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:25.913 08:41:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:25.913 08:41:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:28.461 Initializing NVMe Controllers 00:26:28.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.461 Controller IO queue size 128, less than required. 00:26:28.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:26:28.461 Controller IO queue size 128, less than required. 00:26:28.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:28.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:28.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:28.461 Initialization complete. Launching workers. 00:26:28.461 ======================================================== 00:26:28.461 Latency(us) 00:26:28.461 Device Information : IOPS MiB/s Average min max 00:26:28.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1615.84 403.96 80850.62 44559.89 133311.09 00:26:28.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.08 145.77 233286.97 62229.58 362994.76 00:26:28.461 ======================================================== 00:26:28.461 Total : 2198.92 549.73 121271.71 44559.89 362994.76 00:26:28.461 00:26:28.461 08:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:28.721 No valid NVMe controllers or AIO or URING devices found 00:26:28.721 Initializing NVMe Controllers 00:26:28.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.721 Controller IO queue size 128, less than required. 00:26:28.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:28.721 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:28.721 Controller IO queue size 128, less than required. 00:26:28.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:28.721 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:28.721 WARNING: Some requested NVMe devices were skipped 00:26:28.721 08:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:31.264 Initializing NVMe Controllers 00:26:31.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:31.264 Controller IO queue size 128, less than required. 00:26:31.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:31.264 Controller IO queue size 128, less than required. 00:26:31.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:31.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:31.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:31.264 Initialization complete. Launching workers. 
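The per-poll-group transport statistics that follow belong to the final pass above (--transport-stat). Across this section host/perf.sh sweeps queue depth and I/O size against the same TCP target; roughly:

    # The spdk_nvme_perf sweep in this section; flags verbatim from the trace.
    PERF=./build/bin/spdk_nvme_perf
    TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    $PERF -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$TGT"
    $PERF -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$TGT"
    $PERF -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$TGT"
    $PERF -q 128 -o 36964  -O 4096  -w randrw -M 50 -t 5 -r "$TGT" -c 0xf -P 4   # skipped: 36964 is not a multiple of the 512 B sector size
    $PERF -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$TGT" --transport-stat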
00:26:31.264 00:26:31.264 ==================== 00:26:31.264 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:31.264 TCP transport: 00:26:31.264 polls: 21674 00:26:31.264 idle_polls: 8592 00:26:31.264 sock_completions: 13082 00:26:31.264 nvme_completions: 6473 00:26:31.264 submitted_requests: 9748 00:26:31.264 queued_requests: 1 00:26:31.264 00:26:31.264 ==================== 00:26:31.264 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:31.264 TCP transport: 00:26:31.264 polls: 21810 00:26:31.264 idle_polls: 8381 00:26:31.264 sock_completions: 13429 00:26:31.264 nvme_completions: 6767 00:26:31.264 submitted_requests: 10122 00:26:31.264 queued_requests: 1 00:26:31.264 ======================================================== 00:26:31.264 Latency(us) 00:26:31.264 Device Information : IOPS MiB/s Average min max 00:26:31.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1617.69 404.42 80585.49 39747.09 142690.71 00:26:31.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1691.17 422.79 76636.92 36575.64 126561.11 00:26:31.264 ======================================================== 00:26:31.264 Total : 3308.86 827.21 78567.36 36575.64 142690.71 00:26:31.264 00:26:31.265 08:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:31.265 08:41:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.265 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.265 rmmod nvme_tcp 00:26:31.525 rmmod nvme_fabrics 00:26:31.525 rmmod nvme_keyring 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 3852539 ']' 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 3852539 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3852539 ']' 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3852539 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3852539 00:26:31.525 08:41:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3852539' 00:26:31.525 killing process with pid 3852539 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3852539 00:26:31.525 08:41:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3852539 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.436 08:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.977 00:26:35.977 real 0m24.082s 00:26:35.977 user 0m58.927s 00:26:35.977 sys 0m8.186s 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:35.977 ************************************ 00:26:35.977 END TEST nvmf_perf 00:26:35.977 ************************************ 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.977 ************************************ 00:26:35.977 START TEST nvmf_fio_host 00:26:35.977 ************************************ 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:35.977 * Looking for test storage... 
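That closes out nvmf_perf (about 24 s wall clock) and the harness pivots to nvmf_fio_host, whose test-storage probe continues below. The teardown traced above follows the usual nvmftestfini shape; a hedged sketch, where the namespace-removal step is an assumption since _remove_spdk_ns runs with its output redirected:

    # Teardown shape per the trace above; rpc.py = scripts/rpc.py.
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill $nvmfpid && wait $nvmfpid                         # killprocess
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK's own rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns (not shown in the log)
    ip -4 addr flush cvl_0_1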
00:26:35.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.977 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.978 --rc genhtml_branch_coverage=1 00:26:35.978 --rc genhtml_function_coverage=1 00:26:35.978 --rc genhtml_legend=1 00:26:35.978 --rc geninfo_all_blocks=1 00:26:35.978 --rc geninfo_unexecuted_blocks=1 00:26:35.978 00:26:35.978 ' 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.978 --rc genhtml_branch_coverage=1 00:26:35.978 --rc genhtml_function_coverage=1 00:26:35.978 --rc genhtml_legend=1 00:26:35.978 --rc geninfo_all_blocks=1 00:26:35.978 --rc geninfo_unexecuted_blocks=1 00:26:35.978 00:26:35.978 ' 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.978 --rc genhtml_branch_coverage=1 00:26:35.978 --rc genhtml_function_coverage=1 00:26:35.978 --rc genhtml_legend=1 00:26:35.978 --rc geninfo_all_blocks=1 00:26:35.978 --rc geninfo_unexecuted_blocks=1 00:26:35.978 00:26:35.978 ' 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.978 --rc genhtml_branch_coverage=1 00:26:35.978 --rc genhtml_function_coverage=1 00:26:35.978 --rc genhtml_legend=1 00:26:35.978 --rc geninfo_all_blocks=1 00:26:35.978 --rc geninfo_unexecuted_blocks=1 00:26:35.978 00:26:35.978 ' 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.978 08:41:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.978 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:35.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.979 
08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.979 08:41:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.128 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:44.129 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:44.129 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
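The gather_supported_nvmf_pci_devs walk above matched both ports of an Intel E810 NIC (device ID 0x159b, ice driver) and is about to pick the network interfaces under them. A rough, hedged reconstruction of the loop being traced, assuming the up/up test reads each interface's operstate:

    # Hypothetical reconstruction of the scan traced here; the real logic lives
    # in test/nvmf/common.sh and also covers the x722 and Mellanox IDs elided below.
    pci_devs=(0000:4b:00.0 0000:4b:00.1)   # E810 ports found in this run
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        for net_dev in "${pci_net_devs[@]}"; do
            # keep only link-up interfaces (assumed operstate check)
            [[ $(< "$net_dev/operstate") == up ]] && net_devs+=("${net_dev##*/}")
        done
    done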
00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:44.129 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:44.129 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:26:44.129 00:26:44.129 --- 10.0.0.2 ping statistics --- 00:26:44.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.129 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:26:44.129 00:26:44.129 --- 10.0.0.1 ping statistics --- 00:26:44.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.129 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3859424 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3859424 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3859424 ']' 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:44.129 08:41:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.129 [2024-10-01 08:41:34.898867] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
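With connectivity confirmed again, fio.sh starts its own target instance (EAL initialization continues below). The namespace plumbing that nvmf_tcp_init ran just above is worth seeing in one place; commands verbatim from the trace:

    # nvmf_tcp_init as traced above: one E810 port moves into a namespace to
    # play target (10.0.0.2), the other stays on the host as initiator (10.0.0.1).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up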
00:26:44.129 [2024-10-01 08:41:34.898937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.129 [2024-10-01 08:41:34.970734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.129 [2024-10-01 08:41:35.044322] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.129 [2024-10-01 08:41:35.044361] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.129 [2024-10-01 08:41:35.044369] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.129 [2024-10-01 08:41:35.044376] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.130 [2024-10-01 08:41:35.044382] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.130 [2024-10-01 08:41:35.046040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.130 [2024-10-01 08:41:35.046250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.130 [2024-10-01 08:41:35.046250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.130 [2024-10-01 08:41:35.046103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.130 08:41:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.130 08:41:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:26:44.130 08:41:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:44.130 [2024-10-01 08:41:35.858460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.130 08:41:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:44.130 08:41:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:44.130 08:41:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.130 08:41:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:44.390 Malloc1 00:26:44.390 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:44.651 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:44.912 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.912 [2024-10-01 08:41:36.632061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.912 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:45.173 08:41:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:45.433 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:45.433 fio-3.35 00:26:45.433 Starting 1 thread 00:26:48.006 00:26:48.006 test: (groupid=0, jobs=1): 
err= 0: pid=3860236: Tue Oct 1 08:41:39 2024 00:26:48.006 read: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(80.6MiB/2005msec) 00:26:48.006 slat (usec): min=2, max=277, avg= 2.17, stdev= 2.74 00:26:48.006 clat (usec): min=4083, max=9418, avg=6860.58, stdev=1045.14 00:26:48.006 lat (usec): min=4085, max=9420, avg=6862.75, stdev=1045.09 00:26:48.006 clat percentiles (usec): 00:26:48.006 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5538], 00:26:48.006 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:26:48.006 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:26:48.006 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8717], 99.95th=[ 8848], 00:26:48.006 | 99.99th=[ 9110] 00:26:48.006 bw ( KiB/s): min=37480, max=49424, per=99.96%, avg=41158.00, stdev=5550.69, samples=4 00:26:48.006 iops : min= 9370, max=12356, avg=10289.50, stdev=1387.67, samples=4 00:26:48.006 write: IOPS=10.3k, BW=40.3MiB/s (42.2MB/s)(80.7MiB/2005msec); 0 zone resets 00:26:48.006 slat (usec): min=2, max=265, avg= 2.24, stdev= 2.04 00:26:48.006 clat (usec): min=2885, max=8153, avg=5507.27, stdev=835.54 00:26:48.006 lat (usec): min=2903, max=8155, avg=5509.52, stdev=835.52 00:26:48.006 clat percentiles (usec): 00:26:48.006 | 1.00th=[ 3654], 5.00th=[ 3916], 10.00th=[ 4080], 20.00th=[ 4424], 00:26:48.006 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866], 00:26:48.006 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6325], 95.00th=[ 6521], 00:26:48.006 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 7701], 00:26:48.006 | 99.99th=[ 8094] 00:26:48.006 bw ( KiB/s): min=38216, max=49792, per=99.93%, avg=41202.00, stdev=5728.28, samples=4 00:26:48.006 iops : min= 9554, max=12448, avg=10300.50, stdev=1432.07, samples=4 00:26:48.006 lat (msec) : 4=3.72%, 10=96.28% 00:26:48.006 cpu : usr=72.75%, sys=26.15%, ctx=47, majf=0, minf=9 00:26:48.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:48.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.006 issued rwts: total=20639,20668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.006 00:26:48.006 Run status group 0 (all jobs): 00:26:48.006 READ: bw=40.2MiB/s (42.2MB/s), 40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=80.6MiB (84.5MB), run=2005-2005msec 00:26:48.006 WRITE: bw=40.3MiB/s (42.2MB/s), 40.3MiB/s-40.3MiB/s (42.2MB/s-42.2MB/s), io=80.7MiB (84.7MB), run=2005-2005msec 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 
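Both fio passes here go through SPDK's external ioengine rather than the kernel initiator: fio_nvme LD_PRELOADs the spdk_nvme plugin (after ldd'ing it for libasan/libclang_rt.asan runtimes, none found in this run) and hands the connection string in via --filename. Condensed from the trace, with the repo root shortened to ./:

    # First pass as traced above; the second pass swaps in mock_sgl_config.fio.
    LD_PRELOAD=./build/fio/spdk_nvme \
    /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096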
00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:48.007 08:41:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:48.294 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:48.294 fio-3.35 00:26:48.294 Starting 1 thread 00:26:50.910 00:26:50.910 test: (groupid=0, jobs=1): err= 0: pid=3860843: Tue Oct 1 08:41:42 2024 00:26:50.910 read: IOPS=9323, BW=146MiB/s (153MB/s)(292MiB/2007msec) 00:26:50.910 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.63 00:26:50.910 clat (usec): min=2087, max=17937, avg=8278.95, stdev=2013.66 00:26:50.910 lat (usec): min=2090, max=17941, avg=8282.57, stdev=2013.79 00:26:50.910 clat percentiles (usec): 00:26:50.910 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6456], 00:26:50.910 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8717], 00:26:50.910 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[10945], 95.00th=[11338], 00:26:50.910 | 99.00th=[12780], 99.50th=[13304], 99.90th=[15139], 99.95th=[16712], 00:26:50.910 | 99.99th=[16909] 00:26:50.910 bw ( KiB/s): min=67232, max=86336, per=49.45%, avg=73768.00, stdev=8763.30, samples=4 00:26:50.910 iops : min= 4202, max= 5396, avg=4610.50, stdev=547.71, samples=4 00:26:50.910 write: IOPS=5518, BW=86.2MiB/s (90.4MB/s)(151MiB/1754msec); 0 zone resets 00:26:50.910 slat (usec): min=39, 
max=405, avg=40.98, stdev= 7.97 00:26:50.910 clat (usec): min=2034, max=19424, avg=9493.10, stdev=1582.83 00:26:50.910 lat (usec): min=2074, max=19464, avg=9534.08, stdev=1584.30 00:26:50.910 clat percentiles (usec): 00:26:50.910 | 1.00th=[ 6521], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8225], 00:26:50.910 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:26:50.910 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12125], 00:26:50.910 | 99.00th=[14222], 99.50th=[15533], 99.90th=[18744], 99.95th=[19268], 00:26:50.910 | 99.99th=[19530] 00:26:50.910 bw ( KiB/s): min=69920, max=89824, per=87.07%, avg=76888.00, stdev=9055.43, samples=4 00:26:50.910 iops : min= 4370, max= 5614, avg=4805.50, stdev=565.96, samples=4 00:26:50.910 lat (msec) : 4=0.42%, 10=73.32%, 20=26.27% 00:26:50.910 cpu : usr=83.95%, sys=14.61%, ctx=17, majf=0, minf=27 00:26:50.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:50.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:50.910 issued rwts: total=18712,9680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:50.910 00:26:50.910 Run status group 0 (all jobs): 00:26:50.910 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=292MiB (307MB), run=2007-2007msec 00:26:50.910 WRITE: bw=86.2MiB/s (90.4MB/s), 86.2MiB/s-86.2MiB/s (90.4MB/s-90.4MB/s), io=151MiB (159MB), run=1754-1754msec 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:50.910 rmmod nvme_tcp 00:26:50.910 rmmod nvme_fabrics 00:26:50.910 rmmod nvme_keyring 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 3859424 ']' 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 3859424 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3859424 ']' 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 
3859424 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3859424 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3859424' 00:26:50.910 killing process with pid 3859424 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3859424 00:26:50.910 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3859424 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.172 08:41:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.721 08:41:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:53.721 00:26:53.721 real 0m17.569s 00:26:53.721 user 1m5.557s 00:26:53.721 sys 0m7.529s 00:26:53.721 08:41:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:53.721 08:41:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.721 ************************************ 00:26:53.721 END TEST nvmf_fio_host 00:26:53.721 ************************************ 00:26:53.721 08:41:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:53.721 08:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:53.721 08:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:53.721 08:41:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.721 ************************************ 00:26:53.721 START TEST nvmf_failover 00:26:53.721 ************************************ 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:53.721 * Looking for test storage... 00:26:53.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:53.721 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.722 --rc genhtml_branch_coverage=1 00:26:53.722 --rc genhtml_function_coverage=1 00:26:53.722 --rc genhtml_legend=1 00:26:53.722 --rc geninfo_all_blocks=1 00:26:53.722 --rc geninfo_unexecuted_blocks=1 00:26:53.722 00:26:53.722 ' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.722 --rc genhtml_branch_coverage=1 00:26:53.722 --rc genhtml_function_coverage=1 00:26:53.722 --rc genhtml_legend=1 00:26:53.722 --rc geninfo_all_blocks=1 00:26:53.722 --rc geninfo_unexecuted_blocks=1 00:26:53.722 00:26:53.722 ' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.722 --rc genhtml_branch_coverage=1 00:26:53.722 --rc genhtml_function_coverage=1 00:26:53.722 --rc genhtml_legend=1 00:26:53.722 --rc geninfo_all_blocks=1 00:26:53.722 --rc geninfo_unexecuted_blocks=1 00:26:53.722 00:26:53.722 ' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.722 --rc genhtml_branch_coverage=1 00:26:53.722 --rc genhtml_function_coverage=1 00:26:53.722 --rc genhtml_legend=1 00:26:53.722 --rc geninfo_all_blocks=1 00:26:53.722 --rc geninfo_unexecuted_blocks=1 00:26:53.722 00:26:53.722 ' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:53.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
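rpc_py here, together with bdevperf_rpc_sock just below, defines the two control channels the failover test uses: the nvmf target is configured over the default /var/tmp/spdk.sock, while the bdevperf initiator gets a private socket so both SPDK applications can be driven side by side. Collapsed out of the xtrace that follows, the setup amounts to roughly this (workspace paths shortened; a sketch of the sequence, not the verbatim script):

    rpc_py=scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # Target side, over the default RPC socket:
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side, over bdevperf's private socket:
    $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1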
00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:53.722 08:41:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.869 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:01.870 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:01.870 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:01.870 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:01.870 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:01.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:27:01.870 00:27:01.870 --- 10.0.0.2 ping statistics --- 00:27:01.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.870 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:27:01.870 00:27:01.870 --- 10.0.0.1 ping statistics --- 00:27:01.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.870 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=3865424 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 3865424 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3865424 ']' 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:01.870 08:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:01.870 [2024-10-01 08:41:52.565871] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:27:01.870 [2024-10-01 08:41:52.565927] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.870 [2024-10-01 08:41:52.651494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:01.870 [2024-10-01 08:41:52.743611] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
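Condensed from the nvmf_tcp_init trace above: on a phy run the first detected E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP traffic, connectivity is verified with one ping in each direction, and the target application is then launched inside the namespace. Roughly, with the long workspace paths shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tagged with an SPDK_NVMF comment so teardown can strip it again:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE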
00:27:01.870 [2024-10-01 08:41:52.743669] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.871 [2024-10-01 08:41:52.743678] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.871 [2024-10-01 08:41:52.743685] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.871 [2024-10-01 08:41:52.743691] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.871 [2024-10-01 08:41:52.744990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.871 [2024-10-01 08:41:52.745158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.871 [2024-10-01 08:41:52.745276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.871 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:01.871 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:01.871 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:01.871 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:01.871 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:01.871 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.871 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:01.871 [2024-10-01 08:41:53.548470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.871 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:02.131 Malloc0 00:27:02.131 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.391 08:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:02.391 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.652 [2024-10-01 08:41:54.278003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.652 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:02.652 [2024-10-01 08:41:54.450406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:02.912 [2024-10-01 08:41:54.634959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3865956 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3865956 /var/tmp/bdevperf.sock 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3865956 ']' 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:02.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.912 08:41:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:03.851 08:41:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.851 08:41:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:03.851 08:41:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:04.111 NVMe0n1 00:27:04.111 08:41:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:04.371 00:27:04.371 08:41:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3866185 00:27:04.371 08:41:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:04.371 08:41:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:27:05.752 08:41:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.752 [2024-10-01 08:41:57.323031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1210 is same with the state(6) to be set 00:27:05.752 [2024-10-01 08:41:57.323075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1210 is same with the state(6) to be set 00:27:05.752 [2024-10-01 08:41:57.323081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1210 is same with the state(6) to be set 00:27:05.752 [2024-10-01 08:41:57.323087] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1210 is same with the state(6) to be set
[identical tqpair=0x8d1210 messages repeated through 08:41:57.323247 trimmed]
00:27:05.752 08:41:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:09.051 08:42:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:09.051 00:27:09.052 08:42:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:09.052 [2024-10-01 08:42:00.797574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2010 is same with the state(6) to be set
[identical tqpair=0x8d2010 messages repeated through 08:42:00.797625 trimmed]
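Stripped of the xtrace noise, the failover choreography running against the live bdevperf job is the following; the remaining add/remove steps appear just below and complete the cycle. The bursts of "recv state of tqpair ... is same with the state(6)" errors coincide with the target quiescing the qpairs of each listener as it is pulled, after which bdevperf's outstanding I/O fails over to the surviving path (a reading of this trace, not a statement from the SPDK docs). Arguments elided with ... are unchanged from the commands above:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 ... -s 4420 ...
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 ... -s 4421 ...
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 15 s verify workload
    run_test_pid=$!
    sleep 1
    rpc.py nvmf_subsystem_remove_listener ... -s 4420       # fail over to 4421
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 ... -s 4422 ...
    rpc.py nvmf_subsystem_remove_listener ... -s 4421       # fail over to 4422
    sleep 3
    rpc.py nvmf_subsystem_add_listener ... -s 4420          # restore the original path
    sleep 1
    rpc.py nvmf_subsystem_remove_listener ... -s 4422       # fail back to 4420
    wait $run_test_pid                                      # collect the bdevperf results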
00:27:09.052 08:42:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:27:12.348 08:42:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:12.348 [2024-10-01 08:42:03.981663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:12.348 08:42:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:27:13.290 08:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:13.552 [2024-10-01 08:42:05.169927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2f80 is same with the state(6) to be set
00:27:13.552 (last message repeated 29 more times for tqpair=0x8d2f80)
00:27:13.552 08:42:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3866185
00:27:20.149 {
00:27:20.149   "results": [
00:27:20.149     {
00:27:20.149       "job": "NVMe0n1",
00:27:20.149       "core_mask": "0x1",
00:27:20.149       "workload": "verify",
00:27:20.149       "status": "finished",
00:27:20.149       "verify_range": {
00:27:20.149         "start": 0,
00:27:20.149         "length": 16384
00:27:20.149       },
00:27:20.149       "queue_depth": 128,
00:27:20.149       "io_size": 4096,
00:27:20.149       "runtime": 15.00618,
00:27:20.149       "iops": 11353.522348792298,
00:27:20.149       "mibps": 44.34969667496991,
00:27:20.149       "io_failed": 7397,
00:27:20.149       "io_timeout": 0,
00:27:20.149       "avg_latency_us": 10777.504543023757,
00:27:20.149       "min_latency_us": 532.48,
00:27:20.149       "max_latency_us": 31457.28
00:27:20.149     }
00:27:20.149   ],
00:27:20.149   "core_count": 1
00:27:20.149 }
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3865956
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3865956 ']'
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3865956
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
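Two fields in the results block above are redundant with each other, which allows a quick sanity check: "mibps" is just "iops" times the 4096-byte "io_size", scaled to MiB. For example:

    # mibps = iops * io_size / 2^20
    awk 'BEGIN { printf "%.8f\n", 11353.522348792298 * 4096 / (1024 * 1024) }'
    # -> 44.34969667, matching the reported "mibps"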
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3865956
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3865956'
00:27:20.149 killing process with pid 3865956
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3865956
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3865956
00:27:20.149 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:20.149 [2024-10-01 08:41:54.723611] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:27:20.149 [2024-10-01 08:41:54.723677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865956 ]
00:27:20.149 [2024-10-01 08:41:54.784242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:20.149 [2024-10-01 08:41:54.848645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:27:20.149 Running I/O for 15 seconds...
00:27:20.149 11106.00 IOPS, 43.38 MiB/s
00:27:20.149 [2024-10-01 08:41:57.323534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.149 [2024-10-01 08:41:57.323568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.149 [2024-10-01 08:41:57.323586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.149 [2024-10-01 08:41:57.323595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
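From this point the try.txt dump is one print_command/print_completion pair per I/O that was in flight when the listener disappeared. A tally is more readable than the raw pairs; a small sketch, assuming the try.txt path from the cat command above:

    # Count aborted commands per opcode (READ, WRITE, ASYNC ...) in the dump.
    grep -o 'print_command: \*NOTICE\*: [A-Z]*' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt \
        | awk '{ print $3 }' | sort | uniq -c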
00:27:20.150 (WRITE command/completion pairs continue for lba 96120 through 97112 in 8-block steps, each aborted the same way: ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000)
00:27:20.152 [2024-10-01 08:41:57.325772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:20.152 [2024-10-01 08:41:57.325780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:20.152 [2024-10-01 08:41:57.325787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97120 len:8 PRP1 0x0 PRP2 0x0
00:27:20.152 [2024-10-01 08:41:57.325795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.152 [2024-10-01 08:41:57.325832] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1528920 was disconnected and freed. reset controller.
00:27:20.152 [2024-10-01 08:41:57.325842] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:20.152 [2024-10-01 08:41:57.325861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:20.152 [2024-10-01 08:41:57.325870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.152 (the three remaining ASYNC EVENT REQUEST commands, cid:1 through cid:3, are aborted the same way)
00:27:20.152 [2024-10-01 08:41:57.325932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:20.152 [2024-10-01 08:41:57.329509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:20.152 [2024-10-01 08:41:57.329535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1507ff0 (9): Bad file descriptor
00:27:20.152 [2024-10-01 08:41:57.453749] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
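The block above is the complete initiator-side failover lifecycle: queued I/O is aborted, the dead qpair is freed, bdev_nvme_failover_trid switches from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes on the new path. Which path a controller ends up on can be inspected over the same RPC socket bdevperf exposes; a sketch, assuming the standard bdev_nvme_get_controllers RPC (output shape varies by SPDK version):

    # List bdev_nvme controllers and their current transport IDs.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers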
00:27:20.153 10896.50 IOPS, 42.56 MiB/s
00:27:20.153 11131.33 IOPS, 43.48 MiB/s
00:27:20.153 11113.00 IOPS, 43.41 MiB/s
00:27:20.153 [2024-10-01 08:42:00.800716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.153 [2024-10-01 08:42:00.800755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.153 (command/completion pairs continue for the second failover window: READs for lba 47216 through 47280 and WRITEs for lba 47288 through 47520, interleaved, each aborted with SQ DELETION (00/08); the dump continues below)
00:27:20.154 [2024-10-01 08:42:00.801473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:96 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47608 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 
[2024-10-01 08:42:00.801821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.154 [2024-10-01 08:42:00.801909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.154 [2024-10-01 08:42:00.801939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47736 len:8 PRP1 0x0 PRP2 0x0 00:27:20.154 [2024-10-01 08:42:00.801947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.154 [2024-10-01 08:42:00.801964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.154 [2024-10-01 08:42:00.801970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47744 len:8 PRP1 0x0 PRP2 0x0 00:27:20.154 [2024-10-01 08:42:00.801977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.801985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.154 [2024-10-01 08:42:00.801991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.154 [2024-10-01 08:42:00.802008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47752 len:8 PRP1 0x0 PRP2 0x0 00:27:20.154 [2024-10-01 08:42:00.802016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 
[2024-10-01 08:42:00.802024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.154 [2024-10-01 08:42:00.802030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.154 [2024-10-01 08:42:00.802036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47760 len:8 PRP1 0x0 PRP2 0x0 00:27:20.154 [2024-10-01 08:42:00.802044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.802052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.154 [2024-10-01 08:42:00.802057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.154 [2024-10-01 08:42:00.802066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47768 len:8 PRP1 0x0 PRP2 0x0 00:27:20.154 [2024-10-01 08:42:00.802074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.154 [2024-10-01 08:42:00.802082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47776 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47784 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47792 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47800 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802192] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47808 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47816 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47824 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47832 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47840 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47848 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:27:20.155 [2024-10-01 08:42:00.802362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47856 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47864 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47872 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47880 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47888 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47896 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802526] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47904 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47912 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47920 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47928 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47936 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47944 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47952 len:8 PRP1 0x0 PRP2 0x0 00:27:20.155 [2024-10-01 08:42:00.802716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.155 [2024-10-01 08:42:00.802725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.155 [2024-10-01 08:42:00.802731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.155 [2024-10-01 08:42:00.802737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47960 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.802766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47968 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.802794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47976 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.802824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47984 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.802855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47992 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 
[2024-10-01 08:42:00.802882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48000 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.802911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48008 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.802941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48016 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.802969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48024 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.802977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.802987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.802997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48032 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.803018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.803026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48040 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.803049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.803055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48048 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.803080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.803087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48056 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.803108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.803113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48064 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.803134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.803140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48072 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.803161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.803167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48080 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.803191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.803196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48088 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.803217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.803223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.803228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:48096 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.803235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.813775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.813802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.813814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48104 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.813825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.813835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.813841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.813850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48112 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.813860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.813870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.813877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.813885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48120 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.813894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.813902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.813908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.813914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48128 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.813921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.813930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.813936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.813942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48136 len:8 PRP1 0x0 PRP2 0x0 00:27:20.156 [2024-10-01 08:42:00.813949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.156 [2024-10-01 08:42:00.813957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.156 [2024-10-01 08:42:00.813964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.156 [2024-10-01 08:42:00.813970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48144 len:8 PRP1 0x0 PRP2 0x0 
00:27:20.156 [2024-10-01 08:42:00.813977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.813985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.813991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48152 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48160 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48168 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48176 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48184 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48192 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48200 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48208 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48216 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.157 [2024-10-01 08:42:00.814258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.157 [2024-10-01 08:42:00.814264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48224 len:8 PRP1 0x0 PRP2 0x0 00:27:20.157 [2024-10-01 08:42:00.814272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.157 [2024-10-01 08:42:00.814311] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x152aa90 was disconnected and freed. reset controller. 
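[Note] Every completion above carries the same status pair "(00/08)": status code type 0x0 (generic) and status code 0x08, Command Aborted due to SQ Deletion. That is the expected result of deleting the submission queue during a reset: in-flight commands complete with this status, and nvme_qpair_abort_queued_reqs then completes the still-queued requests manually with it as well. Each aborted command is len:8, i.e. 8 x 512-byte sectors = 4 KiB, which matches the throughput samples (10896.50 IOPS x 4096 B / 2^20 ~= 42.56 MiB/s). A minimal sketch, assuming only SPDK's public completion definitions from spdk/nvme_spec.h, of how such a completion would be classified:

    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    /* Return true when a completion reports the "(00/08)" status printed
     * above: generic status code type, Command Aborted due to SQ Deletion. */
    static bool
    cpl_aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }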
00:27:20.157 [2024-10-01 08:42:00.814321] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:20.157 [2024-10-01 08:42:00.814349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:20.157 [2024-10-01 08:42:00.814358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.157 [2024-10-01 08:42:00.814368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:20.157 [2024-10-01 08:42:00.814375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.157 [2024-10-01 08:42:00.814383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:20.157 [2024-10-01 08:42:00.814391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.157 [2024-10-01 08:42:00.814400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:20.157 [2024-10-01 08:42:00.814408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:20.157 [2024-10-01 08:42:00.814415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:20.157 [2024-10-01 08:42:00.814456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1507ff0 (9): Bad file descriptor
00:27:20.157 [2024-10-01 08:42:00.817990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:20.157 [2024-10-01 08:42:00.866814] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
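[Note] The sequence above is the failover path: after the qpair is freed, the pending admin ASYNC EVENT REQUESTs are aborted with the same (00/08) status, the controller is marked failed, and the reset reconnects to the next registered transport ID (here 10.0.0.2:4422), after which I/O resumes at roughly the prior rate. A small, hypothetical triage helper (plain C, written for this note, not part of SPDK or the test) for sizing such an abort storm when reading a log like this:

    #include <stdio.h>
    #include <string.h>

    /* Count log records reporting "ABORTED - SQ DELETION" on stdin. */
    int main(void)
    {
            char line[4096];
            unsigned long aborts = 0;

            while (fgets(line, sizeof(line), stdin) != NULL)
                    if (strstr(line, "ABORTED - SQ DELETION") != NULL)
                            aborts++;
            printf("SQ-deletion aborts: %lu\n", aborts);
            return 0;
    }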
00:27:20.157 11016.40 IOPS, 43.03 MiB/s 11079.17 IOPS, 43.28 MiB/s 11112.57 IOPS, 43.41 MiB/s 11156.88 IOPS, 43.58 MiB/s
00:27:20.157 [2024-10-01 08:42:05.172496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:20.157 [2024-10-01 08:42:05.172535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs condensed: a second abort storm at 08:42:05, in which in-flight READs (lba 57600-57640) and WRITEs (lba 57896-58072) on sqid:1 again completed with ABORTED - SQ DELETION (00/08) ...]
00:27:20.158 [2024-10-01 08:42:05.173075] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173248] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58240 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.158 [2024-10-01 08:42:05.173426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.158 [2024-10-01 08:42:05.173435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 
[2024-10-01 08:42:05.173600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.159 [2024-10-01 08:42:05.173756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.159 [2024-10-01 08:42:05.173785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:58400 len:8 PRP1 0x0 PRP2 0x0 00:27:20.159 [2024-10-01 08:42:05.173793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.159 [2024-10-01 08:42:05.173840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.159 [2024-10-01 08:42:05.173855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.159 [2024-10-01 08:42:05.173871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:20.159 [2024-10-01 08:42:05.173887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.173895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1507ff0 is same with the state(6) to be set 00:27:20.159 [2024-10-01 08:42:05.174072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.159 [2024-10-01 08:42:05.174083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.159 [2024-10-01 08:42:05.174090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58408 len:8 PRP1 0x0 PRP2 0x0 00:27:20.159 [2024-10-01 08:42:05.174098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.174107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.159 [2024-10-01 08:42:05.174113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.159 [2024-10-01 08:42:05.174119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58416 len:8 PRP1 0x0 PRP2 0x0 00:27:20.159 [2024-10-01 08:42:05.174126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 [2024-10-01 08:42:05.174134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.159 [2024-10-01 08:42:05.174141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.159 [2024-10-01 08:42:05.174147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58424 len:8 PRP1 0x0 PRP2 0x0 00:27:20.159 [2024-10-01 08:42:05.174159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.159 
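The status pair printed as (00/08) on every completion above is NVMe's status-code-type/status-code split: SCT 0x0 is the generic command status set, and SC 0x08 in that set is "Command Aborted due to SQ Deletion", which SPDK renders as ABORTED - SQ DELETION. A minimal decoding sketch in Python; the table and function name are illustrative, not SPDK code:

# Illustrative only: decode the "(SCT/SC)" pair that spdk_nvme_print_completion
# prints, e.g. "(00/08)". SCT 0x0 = generic command status; within that set,
# SC 0x08 = Command Aborted due to SQ Deletion (NVMe base specification).
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct, sc):
    if sct == 0x0:  # generic command status
        return GENERIC_STATUS.get(sc, "generic sc=0x%02x" % sc)
    return "sct=0x%x sc=0x%02x" % (sct, sc)

assert decode_status(0x00, 0x08) == "ABORTED - SQ DELETION"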
00:27:20.159 [2024-10-01 08:42:05.174167 - 08:42:05.174768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:58432-58600 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1
00:27:20.160 [2024-10-01 08:42:05.174776 - 08:42:05.175243] same abort/manual-complete pattern: READ sqid:1 cid:0 nsid:1 lba:57648-57776 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1
00:27:20.161 [2024-10-01 08:42:05.188247 - 08:42:05.188441] READ sqid:1 cid:0 nsid:1 lba:57784-57824 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1
00:27:20.161 [2024-10-01 08:42:05.188449 - 08:42:05.188470] WRITE sqid:1 cid:0 nsid:1 lba:58608 len:8 PRP1 0x0 PRP2 0x0, completed ABORTED - SQ DELETION (00/08) qid:1
00:27:20.161 [2024-10-01 08:42:05.188479 - 08:42:05.188564] READ sqid:1 cid:0 nsid:1 lba:57832-57848 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1
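From here the log is one triplet per queued request: an aborting queued i/o error from nvme_qpair_abort_queued_reqs, a Command completed manually notice carrying the command (PRP1 0x0 PRP2 0x0), and the synthetic ABORTED - SQ DELETION completion, with lba advancing by the len of 8 blocks each time. A hedged parsing sketch that collapses such runs into LBA ranges; the regex is written only against the record format shown above, and lba_ranges is a made-up helper:

import re

CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")

def lba_ranges(log_text):
    """Collapse consecutive same-opcode records whose LBAs advance by len
    into (opcode, first_lba, last_lba, count) runs."""
    runs = []
    for m in CMD.finditer(log_text):
        op, lba, length = m[1], int(m[2]), int(m[3])
        if runs and runs[-1][0] == op and lba == runs[-1][2] + length:
            runs[-1][2] = lba   # extend the current run
            runs[-1][3] += 1
        else:
            runs.append([op, lba, lba, 1])
    return [tuple(r) for r in runs]

# Feeding it this section should yield runs such as ("WRITE", 58432, 58600, 22).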
00:27:20.161 [2024-10-01 08:42:05.188572 - 08:42:05.188708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request: READ sqid:1 cid:0 nsid:1 lba:57856-57888 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1
00:27:20.162 [2024-10-01 08:42:05.188718 - 08:42:05.188912] READ sqid:1 cid:0 nsid:1 lba:57592-57640 len:8 PRP1 0x0 PRP2 0x0 (the same LBAs printed earlier with SGL descriptors), each completed ABORTED - SQ DELETION (00/08) qid:1
00:27:20.162 [2024-10-01 08:42:05.188920 - 08:42:05.189269] WRITE sqid:1 cid:0 nsid:1 lba:57896-57984 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1
00:27:20.163 [2024-10-01 08:42:05.189278]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57992 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58000 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58008 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58016 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58024 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58032 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58040 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58048 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58056 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58064 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58072 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58080 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 
08:42:05.189627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58088 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58096 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58104 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58112 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58120 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58128 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189799] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58136 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58144 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58152 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58160 len:8 PRP1 0x0 PRP2 0x0 00:27:20.163 [2024-10-01 08:42:05.189903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.163 [2024-10-01 08:42:05.189910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.163 [2024-10-01 08:42:05.189916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.163 [2024-10-01 08:42:05.189922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58168 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.189930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.189939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.189946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.189952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58176 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.189960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.189968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.189974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.189980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58184 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.189987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.190000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.190006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.190013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58192 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.190022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.190030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.190035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.190041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58200 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.190048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.190057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.190065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.190072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58208 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.190079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.190087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.190093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.190099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58216 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.190106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.190114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.190121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.190127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58224 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 
08:42:05.197157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58232 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58240 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58248 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58256 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58264 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58272 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58280 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58288 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58296 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58304 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58312 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58320 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:58328 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58336 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58344 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.164 [2024-10-01 08:42:05.197602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.164 [2024-10-01 08:42:05.197609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58352 len:8 PRP1 0x0 PRP2 0x0 00:27:20.164 [2024-10-01 08:42:05.197616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.164 [2024-10-01 08:42:05.197624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.165 [2024-10-01 08:42:05.197630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.165 [2024-10-01 08:42:05.197638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58360 len:8 PRP1 0x0 PRP2 0x0 00:27:20.165 [2024-10-01 08:42:05.197648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.165 [2024-10-01 08:42:05.197657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.165 [2024-10-01 08:42:05.197662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.165 [2024-10-01 08:42:05.197669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58368 len:8 PRP1 0x0 PRP2 0x0 00:27:20.165 [2024-10-01 08:42:05.197676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.165 [2024-10-01 08:42:05.197684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.165 [2024-10-01 08:42:05.197691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.165 [2024-10-01 08:42:05.197697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58376 len:8 PRP1 0x0 PRP2 0x0 
00:27:20.165 [2024-10-01 08:42:05.197705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.165 [2024-10-01 08:42:05.197713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.165 [2024-10-01 08:42:05.197720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.165 [2024-10-01 08:42:05.197727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58384 len:8 PRP1 0x0 PRP2 0x0 00:27:20.165 [2024-10-01 08:42:05.197735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.165 [2024-10-01 08:42:05.197742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.165 [2024-10-01 08:42:05.197749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.165 [2024-10-01 08:42:05.197756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58392 len:8 PRP1 0x0 PRP2 0x0 00:27:20.165 [2024-10-01 08:42:05.197764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.165 [2024-10-01 08:42:05.197773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:20.165 [2024-10-01 08:42:05.197779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:20.165 [2024-10-01 08:42:05.197785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58400 len:8 PRP1 0x0 PRP2 0x0 00:27:20.165 [2024-10-01 08:42:05.197792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:20.165 [2024-10-01 08:42:05.197831] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x152a750 was disconnected and freed. reset controller. 00:27:20.165 [2024-10-01 08:42:05.197842] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:20.165 [2024-10-01 08:42:05.197851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:20.165 [2024-10-01 08:42:05.197896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1507ff0 (9): Bad file descriptor 00:27:20.165 [2024-10-01 08:42:05.201467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:20.165 [2024-10-01 08:42:05.242703] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
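The abort storm above is the normal signature of a TCP path failover: bdev_nvme frees the disconnected qpair, manually completes every queued READ/WRITE as ABORTED - SQ DELETION, moves the trid from 10.0.0.2:4422 back to 10.0.0.2:4420, and resets the controller. The script later verifies the run by counting these reset completions in the captured output; a minimal standalone sketch of that check (the try.txt path and the expected count of 3 are taken from this trace):

    # count completed failover/reset cycles in the captured trace
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")   # one hit per completed reset
    (( count == 3 )) || echo "expected 3 successful resets, saw $count"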
00:27:20.165 11090.44 IOPS, 43.32 MiB/s 11141.40 IOPS, 43.52 MiB/s 11198.09 IOPS, 43.74 MiB/s 11238.25 IOPS, 43.90 MiB/s 11271.92 IOPS, 44.03 MiB/s 11332.36 IOPS, 44.27 MiB/s 00:27:20.165 Latency(us) 00:27:20.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.165 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:20.165 Verification LBA range: start 0x0 length 0x4000 00:27:20.165 NVMe0n1 : 15.01 11353.52 44.35 492.93 0.00 10777.50 532.48 31457.28 00:27:20.165 =================================================================================================================== 00:27:20.165 Total : 11353.52 44.35 492.93 0.00 10777.50 532.48 31457.28 00:27:20.165 Received shutdown signal, test time was about 15.000000 seconds 00:27:20.165 00:27:20.165 Latency(us) 00:27:20.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.165 =================================================================================================================== 00:27:20.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3869678 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3869678 /var/tmp/bdevperf.sock 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3869678 ']' 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:20.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.165 08:42:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:20.736 08:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.736 08:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:20.736 08:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:20.736 [2024-10-01 08:42:12.531006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:20.997 08:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:20.997 [2024-10-01 08:42:12.711391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:20.997 08:42:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.256 NVMe0n1 00:27:21.256 08:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.825 00:27:21.825 08:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:21.825 00:27:22.085 08:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:22.085 08:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:22.085 08:42:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:22.345 08:42:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:25.642 08:42:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:25.642 08:42:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:25.642 08:42:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3870889 00:27:25.642 08:42:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:25.642 08:42:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3870889 00:27:26.583 { 00:27:26.583 "results": [ 00:27:26.583 { 00:27:26.583 "job": "NVMe0n1", 00:27:26.583 "core_mask": "0x1", 00:27:26.583 "workload": "verify", 
00:27:26.583 "status": "finished", 00:27:26.583 "verify_range": { 00:27:26.583 "start": 0, 00:27:26.583 "length": 16384 00:27:26.583 }, 00:27:26.583 "queue_depth": 128, 00:27:26.583 "io_size": 4096, 00:27:26.583 "runtime": 1.009373, 00:27:26.583 "iops": 11176.245055098561, 00:27:26.583 "mibps": 43.657207246478755, 00:27:26.583 "io_failed": 0, 00:27:26.583 "io_timeout": 0, 00:27:26.583 "avg_latency_us": 11395.385345270808, 00:27:26.583 "min_latency_us": 2430.2933333333335, 00:27:26.583 "max_latency_us": 13271.04 00:27:26.583 } 00:27:26.583 ], 00:27:26.583 "core_count": 1 00:27:26.583 } 00:27:26.583 08:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:26.583 [2024-10-01 08:42:11.575972] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:27:26.583 [2024-10-01 08:42:11.576036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3869678 ] 00:27:26.583 [2024-10-01 08:42:11.636610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.583 [2024-10-01 08:42:11.700532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.583 [2024-10-01 08:42:14.003474] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:26.583 [2024-10-01 08:42:14.003521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.583 [2024-10-01 08:42:14.003532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.583 [2024-10-01 08:42:14.003542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.583 [2024-10-01 08:42:14.003550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.583 [2024-10-01 08:42:14.003558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.583 [2024-10-01 08:42:14.003565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.583 [2024-10-01 08:42:14.003573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.583 [2024-10-01 08:42:14.003580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.583 [2024-10-01 08:42:14.003588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:26.583 [2024-10-01 08:42:14.003616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:26.583 [2024-10-01 08:42:14.003631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220cff0 (9): Bad file descriptor 00:27:26.583 [2024-10-01 08:42:14.059191] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:26.584 Running I/O for 1 seconds... 
00:27:26.584 11153.00 IOPS, 43.57 MiB/s 00:27:26.584 Latency(us) 00:27:26.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.584 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:26.584 Verification LBA range: start 0x0 length 0x4000 00:27:26.584 NVMe0n1 : 1.01 11176.25 43.66 0.00 0.00 11395.39 2430.29 13271.04 00:27:26.584 =================================================================================================================== 00:27:26.584 Total : 11176.25 43.66 0.00 0.00 11395.39 2430.29 13271.04 00:27:26.584 08:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:26.584 08:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:26.845 08:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:27.108 08:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:27.108 08:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:27.108 08:42:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:27.367 08:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3869678 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3869678 ']' 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3869678 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3869678 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3869678' 00:27:30.665 killing process with pid 3869678 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3869678 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3869678 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:30.665 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
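Each teardown round in the trace above follows the same pattern: confirm via bdev_nvme_get_controllers that bdevperf still reports the NVMe0 controller, detach one path, then give the multipath logic a few seconds to fail over. A sketch of one such round, using the RPC socket, address, port, and NQN exactly as they appear in this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # proceed only while the controller is still registered with bdevperf
    if "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0; then
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
        sleep 3   # allow the remaining path to take over before the next check
    fi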
host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:30.924 rmmod nvme_tcp 00:27:30.924 rmmod nvme_fabrics 00:27:30.924 rmmod nvme_keyring 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:30.924 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 3865424 ']' 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 3865424 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3865424 ']' 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3865424 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3865424 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3865424' 00:27:31.184 killing process with pid 3865424 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3865424 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3865424 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:27:31.184 
08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.184 08:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:33.727 00:27:33.727 real 0m40.034s 00:27:33.727 user 2m3.621s 00:27:33.727 sys 0m8.405s 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:33.727 ************************************ 00:27:33.727 END TEST nvmf_failover 00:27:33.727 ************************************ 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.727 ************************************ 00:27:33.727 START TEST nvmf_host_discovery 00:27:33.727 ************************************ 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:33.727 * Looking for test storage... 
00:27:33.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.727 --rc genhtml_branch_coverage=1 00:27:33.727 --rc genhtml_function_coverage=1 00:27:33.727 --rc genhtml_legend=1 00:27:33.727 --rc geninfo_all_blocks=1 00:27:33.727 --rc geninfo_unexecuted_blocks=1 00:27:33.727 00:27:33.727 ' 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.727 --rc genhtml_branch_coverage=1 00:27:33.727 --rc genhtml_function_coverage=1 00:27:33.727 --rc genhtml_legend=1 00:27:33.727 --rc geninfo_all_blocks=1 00:27:33.727 --rc geninfo_unexecuted_blocks=1 00:27:33.727 00:27:33.727 ' 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.727 --rc genhtml_branch_coverage=1 00:27:33.727 --rc genhtml_function_coverage=1 00:27:33.727 --rc genhtml_legend=1 00:27:33.727 --rc geninfo_all_blocks=1 00:27:33.727 --rc geninfo_unexecuted_blocks=1 00:27:33.727 00:27:33.727 ' 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.727 --rc genhtml_branch_coverage=1 00:27:33.727 --rc genhtml_function_coverage=1 00:27:33.727 --rc genhtml_legend=1 00:27:33.727 --rc geninfo_all_blocks=1 00:27:33.727 --rc geninfo_unexecuted_blocks=1 00:27:33.727 00:27:33.727 ' 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:33.727 08:42:25 
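The "lt 1.15 2" trace above is the harness's dotted-version comparison deciding which lcov option set to export: both version strings are split on '.', '-' and ':' into arrays (the IFS=.-: / read -ra steps), fields are normalized by the decimal helper, and the arrays are compared component by component. A condensed sketch of the same logic under an assumed name (the in-tree entry point is lt, which delegates to cmp_versions in scripts/common.sh):

# Return 0 when dotted version $1 is strictly less than $2.
version_lt() {
    local -a ver1 ver2
    local IFS=.-:                      # split on '.', '-' and ':' like the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0    # non-numeric fields count as 0,
        [[ $b =~ ^[0-9]+$ ]] || b=0    # which is the decimal helper's job
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                           # equal is not less-than
}

Usage matching the trace: version_lt 1.15 2 succeeds, so the branch-coverage lcov options get exported.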
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.727 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:33.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:33.728 08:42:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:41.870 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.870 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:41.871 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:41.871 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:41.871 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.871 08:42:32 
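What the gather_supported_nvmf_pci_devs walk above is doing: known Intel E810 (0x1592/0x159b) and Mellanox device IDs are collected from a PCI bus cache, then each matching PCI function is resolved to its kernel net device through sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1 here. A simplified sketch of that sysfs lookup, assuming pci_devs already holds the matching PCI addresses:

net_devs=()
for pci in "${pci_devs[@]}"; do
    # A bound NIC exposes its interface name under .../net/<ifname>.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue      # skip unbound functions
    pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done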
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:27:41.871 00:27:41.871 --- 10.0.0.2 ping statistics --- 00:27:41.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.871 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
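The nvmf_tcp_init sequence above builds the two-sided test topology that the pings then verify: one physical port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a tagged iptables rule opens the NVMe/TCP port. Condensed from the commands in the trace:

NS=cvl_0_0_ns_spdk                       # namespace name from the trace
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port lives inside the ns
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, tagged so the SPDK_NVMF cleanup can find it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

Because the target interface lives in its own namespace, traffic between 10.0.0.1 and 10.0.0.2 really crosses the two E810 ports instead of being short-circuited through loopback.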
00:27:41.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:27:41.871 00:27:41.871 --- 10.0.0.1 ping statistics --- 00:27:41.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.871 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=3876051 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 3876051 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3876051 ']' 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.871 08:42:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.871 [2024-10-01 08:42:32.841287] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
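nvmfappstart then launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... -m 0x2) and waitforlisten blocks until the app's RPC socket answers, which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line. A plausible reduction of that wait loop; the in-tree helper differs in details such as retry counts, but the poll-the-RPC-socket idea is the same:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1      # app died while starting
        # rpc.py exits 0 once the socket accepts requests.
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}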
00:27:41.871 [2024-10-01 08:42:32.841359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.871 [2024-10-01 08:42:32.929280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.871 [2024-10-01 08:42:33.022800] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.871 [2024-10-01 08:42:33.022859] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.871 [2024-10-01 08:42:33.022868] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.871 [2024-10-01 08:42:33.022875] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.871 [2024-10-01 08:42:33.022881] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.871 [2024-10-01 08:42:33.023651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.871 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:41.871 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:27:41.871 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:41.871 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:41.871 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.871 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.871 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:41.872 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.872 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 [2024-10-01 08:42:33.696220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 [2024-10-01 08:42:33.708477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 null0 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
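At this point the target side is provisioned over RPC: the TCP transport is created with the options carried in NVMF_TRANSPORT_OPTS, a listener for the well-known discovery NQN goes up on port 8009, and two 1000 MB, 512-byte-block null bdevs are created to back the namespaces added later. The same steps as plain rpc.py calls against the target's default socket:

RPC="scripts/rpc.py"                    # target app, default /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
     -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512    # args: name, size in MB, block size
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine              # let bdev examination settle first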
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 null1 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3876394 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3876394 /tmp/host.sock 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3876394 ']' 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:42.132 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:42.132 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 [2024-10-01 08:42:33.766915] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
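The test then starts a second SPDK app to play the host: the same nvmf_tgt binary, pinned to one core (-m 0x1), but with -r /tmp/host.sock so its RPC server binds a separate socket. From here on, every rpc_cmd -s /tmp/host.sock call drives the host-side bdev_nvme discovery code, while bare rpc_cmd keeps addressing the in-namespace target. A sketch of that split, with paths abbreviated relative to the spdk checkout:

# Target: runs inside the namespace, RPC on the default /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Host: root namespace, separate RPC socket so the two apps do not collide.
build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!

scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme   # host-side RPC
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # target-side RPC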
00:27:42.132 [2024-10-01 08:42:33.766966] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3876394 ] 00:27:42.132 [2024-10-01 08:42:33.821277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.132 [2024-10-01 08:42:33.888311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:42.393 08:42:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.393 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.394 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.654 [2024-10-01 08:42:34.313946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:42.654 08:42:34 
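The repeated get_subsystem_names / get_bdev_list probes above are thin wrappers: query the host app over /tmp/host.sock, extract the .name fields with jq, sort them, and flatten to a single line with xargs so the results can be compared as plain strings (which is why the empty-state checks read [[ '' == '' ]]). Reconstructed from the trace:

get_subsystem_names() {
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}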
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:42.654 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:42.655 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:42.655 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:27:42.916 08:42:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:43.488 [2024-10-01 08:42:35.045970] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:43.488 [2024-10-01 08:42:35.045999] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:43.488 [2024-10-01 08:42:35.046014] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:43.488 
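waitforcondition, used for every state check in this test, retries an arbitrary bash expression a bounded number of times (the local max=10 ... (( max-- )) loop with an eval per attempt) instead of sleeping a fixed interval and hoping the discovery path has settled. A sketch matching the trace; the sleep step between attempts is an assumption, since the trace elides it:

waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0    # condition strings look like
        sleep 1                     #   '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    done
    return 1                        # condition never became true
}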
[2024-10-01 08:42:35.175424] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:43.749 [2024-10-01 08:42:35.401184] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:43.749 [2024-10-01 08:42:35.401212] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:43.749 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
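A note on the odd-looking comparisons such as [[ nvme0n1 == \n\v\m\e\0\n\1 ]] in this trace: inside [[ ... ]] the right-hand side is a glob pattern, so when the harness quotes the expected value, xtrace prints it with every character backslash-escaped. The escaping forces an exact literal match even if the expected string ever contained glob metacharacters like * or ?. Both spellings below are equivalent:

name=nvme0n1
[[ $name == "nvme0n1" ]] && echo match        # quoting disables globbing...
[[ $name == \n\v\m\e\0\n\1 ]] && echo match   # ...as does per-character escaping,
                                              # which is what the xtrace shows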
00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:44.012 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:44.013 08:42:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:44.013 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.274 [2024-10-01 08:42:35.838015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:44.274 [2024-10-01 08:42:35.839109] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:44.274 [2024-10-01 08:42:35.839136] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:44.274 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:44.275 08:42:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:44.275 [2024-10-01 08:42:35.966530] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:44.275 08:42:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:44.536 [2024-10-01 08:42:36.277152] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:44.536 [2024-10-01 08:42:36.277171] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:44.536 [2024-10-01 08:42:36.277176] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:45.480 08:42:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:45.480 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 [2024-10-01 08:42:37.109707] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:45.481 [2024-10-01 08:42:37.109732] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:45.481 [2024-10-01 08:42:37.112851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.481 [2024-10-01 08:42:37.112872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.481 [2024-10-01 08:42:37.112882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.481 [2024-10-01 08:42:37.112890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.481 [2024-10-01 08:42:37.112898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.481 [2024-10-01 08:42:37.112906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.481 [2024-10-01 08:42:37.112914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.481 [2024-10-01 08:42:37.112921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.481 [2024-10-01 08:42:37.112929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a090 is same with the state(6) to be set 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:45.481 [2024-10-01 08:42:37.122863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a090 (9): Bad file descriptor 00:27:45.481 [2024-10-01 08:42:37.132901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:45.481 [2024-10-01 08:42:37.133265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.481 [2024-10-01 08:42:37.133309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a090 with addr=10.0.0.2, port=4420 00:27:45.481 [2024-10-01 08:42:37.133322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a090 is same with the state(6) to be set 00:27:45.481 [2024-10-01 08:42:37.133343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a090 (9): Bad file descriptor 00:27:45.481 [2024-10-01 08:42:37.133355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:45.481 [2024-10-01 08:42:37.133362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:45.481 [2024-10-01 08:42:37.133371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
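
The error burst starting here is expected, not a test failure: discovery.sh@127 just removed the 4420 listener, so the target tears down that queue pair (the ABORTED - SQ DELETION completions above) and every host-side reconnect to 10.0.0.2:4420 is refused (connect() errno 111, i.e. ECONNREFUSED). The identical "resetting controller / Bad file descriptor" blocks that repeat below are the bdev_nvme reconnect loop spinning until the discovery service prunes the dead path. A minimal sketch of what is being exercised, reusing the helpers reconstructed earlier (the literal 4421 stands in for $NVMF_SECOND_PORT as resolved in this trace):

    # Target side: drop the first listener; the 4421 listener remains.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Host side: the reconnect noise stops once discovery removes the
    # stale 4420 path, leaving only the second port attached.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'
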
00:27:45.481 [2024-10-01 08:42:37.133387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.481 [2024-10-01 08:42:37.142959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:45.481 [2024-10-01 08:42:37.143305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.481 [2024-10-01 08:42:37.143319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a090 with addr=10.0.0.2, port=4420 00:27:45.481 [2024-10-01 08:42:37.143327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a090 is same with the state(6) to be set 00:27:45.481 [2024-10-01 08:42:37.143339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a090 (9): Bad file descriptor 00:27:45.481 [2024-10-01 08:42:37.143349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:45.481 [2024-10-01 08:42:37.143356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:45.481 [2024-10-01 08:42:37.143363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:45.481 [2024-10-01 08:42:37.143374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:45.481 [2024-10-01 08:42:37.153018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:45.481 [2024-10-01 08:42:37.153357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.481 [2024-10-01 08:42:37.153372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a090 with addr=10.0.0.2, port=4420 00:27:45.481 [2024-10-01 08:42:37.153381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a090 is same with the state(6) to be set 00:27:45.481 [2024-10-01 08:42:37.153394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a090 (9): Bad file descriptor 00:27:45.481 [2024-10-01 08:42:37.153404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:45.481 [2024-10-01 08:42:37.153411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller 
reinitialization failed 00:27:45.481 [2024-10-01 08:42:37.153422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:45.481 [2024-10-01 08:42:37.153433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:45.481 [2024-10-01 08:42:37.163452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:45.481 [2024-10-01 08:42:37.163782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.481 [2024-10-01 08:42:37.163796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a090 with addr=10.0.0.2, port=4420 00:27:45.481 [2024-10-01 08:42:37.163803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a090 is same with the state(6) to be set 00:27:45.481 [2024-10-01 08:42:37.163815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a090 (9): Bad file descriptor 00:27:45.481 [2024-10-01 08:42:37.163825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:45.481 [2024-10-01 08:42:37.163832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:45.481 [2024-10-01 08:42:37.163839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:45.481 [2024-10-01 08:42:37.163850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
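
The is_notification_count_eq assertions sprinkled through this test ride on SPDK's notify RPC. From the trace (notify_get_notifications -i <notify_id>, jq '. | length', and notify_id advancing 0 -> 1 -> 2 -> 4), the helpers plausibly look like the following reconstruction:

    # Count only events newer than the last check; notify_id is a cursor
    # advanced past everything already seen (reconstructed from the trace).
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }
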
00:27:45.481 [2024-10-01 08:42:37.173510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:45.481 [2024-10-01 08:42:37.173834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.481 [2024-10-01 08:42:37.173846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a090 with addr=10.0.0.2, port=4420 00:27:45.481 [2024-10-01 08:42:37.173853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a090 is same with the state(6) to be set 00:27:45.481 [2024-10-01 08:42:37.173864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a090 (9): Bad file descriptor 00:27:45.481 [2024-10-01 08:42:37.173875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:45.482 [2024-10-01 08:42:37.173881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:45.482 [2024-10-01 08:42:37.173888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:45.482 [2024-10-01 08:42:37.173899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.482 [2024-10-01 08:42:37.183562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:45.482 [2024-10-01 08:42:37.183892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.482 [2024-10-01 08:42:37.183904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a090 with addr=10.0.0.2, port=4420 00:27:45.482 [2024-10-01 08:42:37.183912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a090 is same with the state(6) to be set 00:27:45.482 [2024-10-01 08:42:37.183922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a090 (9): Bad file descriptor 00:27:45.482 [2024-10-01 08:42:37.183937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:45.482 [2024-10-01 08:42:37.183943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:45.482 [2024-10-01 08:42:37.183950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:45.482 [2024-10-01 08:42:37.183961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
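
Further below, the duplicate bdev_nvme_start_discovery calls are expected to fail with JSON-RPC error -17 "File exists", and the script asserts that with the NOT wrapper traced at autotest_common.sh@650-@677. A sketch consistent with those trace markers (the es bookkeeping, the es > 128 signal check, and the final (( !es == 0 )) inversion), offered as a reconstruction rather than the verbatim helper:

    # Succeed only when the wrapped command fails with an ordinary error
    # (reconstructed from the autotest_common.sh xtrace seen below).
    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && es=0  # death by signal does not count as a clean failure
        ((!es == 0))          # invert: NOT returns 0 iff the command failed
    }

    # Usage mirrored from the trace: re-registering the same discovery
    # service must be rejected by the RPC server with "File exists".
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
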
00:27:45.482 [2024-10-01 08:42:37.193615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:45.482 [2024-10-01 08:42:37.193946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.482 [2024-10-01 08:42:37.193957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a090 with addr=10.0.0.2, port=4420 00:27:45.482 [2024-10-01 08:42:37.193964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a090 is same with the state(6) to be set 00:27:45.482 [2024-10-01 08:42:37.193975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a090 (9): Bad file descriptor 00:27:45.482 [2024-10-01 08:42:37.193985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:45.482 [2024-10-01 08:42:37.193991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:45.482 [2024-10-01 08:42:37.194003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:45.482 [2024-10-01 08:42:37.194014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.482 [2024-10-01 08:42:37.196561] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:45.482 [2024-10-01 08:42:37.196579] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:45.482 08:42:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.482 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( 
max-- )) 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.745 08:42:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.131 [2024-10-01 08:42:38.535196] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:47.131 [2024-10-01 08:42:38.535214] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:47.131 [2024-10-01 08:42:38.535231] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:47.131 [2024-10-01 08:42:38.621497] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:47.131 [2024-10-01 08:42:38.936570] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:47.131 [2024-10-01 08:42:38.936601] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.131 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.131 request: 00:27:47.131 { 00:27:47.131 "name": "nvme", 00:27:47.131 "trtype": "tcp", 00:27:47.131 "traddr": "10.0.0.2", 00:27:47.391 "adrfam": "ipv4", 00:27:47.391 "trsvcid": "8009", 00:27:47.391 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:47.391 "wait_for_attach": true, 00:27:47.391 "method": "bdev_nvme_start_discovery", 00:27:47.391 "req_id": 1 00:27:47.391 } 00:27:47.391 Got JSON-RPC error response 00:27:47.391 response: 00:27:47.391 { 00:27:47.391 "code": -17, 00:27:47.391 "message": "File exists" 00:27:47.391 } 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:47.391 08:42:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.391 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.392 request: 00:27:47.392 { 00:27:47.392 "name": "nvme_second", 00:27:47.392 "trtype": "tcp", 00:27:47.392 "traddr": "10.0.0.2", 00:27:47.392 "adrfam": "ipv4", 00:27:47.392 "trsvcid": "8009", 00:27:47.392 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:47.392 "wait_for_attach": true, 00:27:47.392 "method": "bdev_nvme_start_discovery", 00:27:47.392 "req_id": 1 00:27:47.392 } 00:27:47.392 Got JSON-RPC error response 00:27:47.392 response: 00:27:47.392 { 00:27:47.392 "code": -17, 00:27:47.392 "message": "File exists" 00:27:47.392 } 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.392 08:42:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:48.774 [2024-10-01 08:42:40.196079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:48.774 [2024-10-01 08:42:40.196118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ab7a0 with addr=10.0.0.2, port=8010 00:27:48.774 [2024-10-01 08:42:40.196134] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:48.774 [2024-10-01 08:42:40.196141] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:48.774 [2024-10-01 08:42:40.196149] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:49.386 [2024-10-01 08:42:41.198414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:49.386 [2024-10-01 08:42:41.198438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ab7a0 with addr=10.0.0.2, port=8010 00:27:49.386 [2024-10-01 08:42:41.198450] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:49.386 [2024-10-01 08:42:41.198456] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:49.386 [2024-10-01 08:42:41.198463] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:50.435 [2024-10-01 08:42:42.200415] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:50.435 request: 00:27:50.435 { 00:27:50.435 "name": "nvme_second", 00:27:50.435 "trtype": "tcp", 00:27:50.435 "traddr": "10.0.0.2", 00:27:50.435 "adrfam": "ipv4", 00:27:50.435 "trsvcid": "8010", 00:27:50.435 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:50.435 "wait_for_attach": false, 00:27:50.435 "attach_timeout_ms": 3000, 00:27:50.435 "method": "bdev_nvme_start_discovery", 00:27:50.435 "req_id": 1 00:27:50.435 } 00:27:50.435 Got JSON-RPC error response 00:27:50.435 response: 00:27:50.435 { 00:27:50.435 "code": -110, 00:27:50.435 "message": "Connection timed out" 00:27:50.435 } 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:50.435 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3876394 00:27:50.697 08:42:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:50.697 rmmod nvme_tcp 00:27:50.697 rmmod nvme_fabrics 00:27:50.697 rmmod nvme_keyring 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 3876051 ']' 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 3876051 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3876051 ']' 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3876051 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3876051 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3876051' 00:27:50.697 killing process with pid 3876051 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3876051 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3876051 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:50.697 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:50.957 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:50.958 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:27:50.958 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:27:50.958 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:50.958 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:50.958 08:42:42 
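nvmftestfini then unwinds the fixture in reverse order of setup. Roughly the sequence traced above, with $nvmfpid as an illustrative stand-in for the pid captured at startup (the namespace deletion is presumably what _remove_spdk_ns performs; that helper's body is not shown in this log):

    modprobe -v -r nvme-tcp           # also drops nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    # Remove only the SPDK-tagged firewall rules, leave everything else:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1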
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.958 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.958 08:42:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.872 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:52.872 00:27:52.872 real 0m19.486s 00:27:52.872 user 0m21.950s 00:27:52.872 sys 0m7.046s 00:27:52.872 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.872 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:52.872 ************************************ 00:27:52.872 END TEST nvmf_host_discovery 00:27:52.872 ************************************ 00:27:52.872 08:42:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:52.872 08:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:52.872 08:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.872 08:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.872 ************************************ 00:27:52.872 START TEST nvmf_host_multipath_status 00:27:52.872 ************************************ 00:27:52.872 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:53.134 * Looking for test storage... 
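Each suite in this run is launched through the run_test wrapper from autotest_common.sh, which prints the START/END banners and the real/user/sys timing summary seen above. The invocation traced at nvmf_host.sh@27, restated:

    run_test nvmf_host_multipath_status \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh \
        --transport=tcp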
00:27:53.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.134 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:53.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.135 --rc genhtml_branch_coverage=1 00:27:53.135 --rc genhtml_function_coverage=1 00:27:53.135 --rc genhtml_legend=1 00:27:53.135 --rc geninfo_all_blocks=1 00:27:53.135 --rc geninfo_unexecuted_blocks=1 00:27:53.135 00:27:53.135 ' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:53.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.135 --rc genhtml_branch_coverage=1 00:27:53.135 --rc genhtml_function_coverage=1 00:27:53.135 --rc genhtml_legend=1 00:27:53.135 --rc geninfo_all_blocks=1 00:27:53.135 --rc geninfo_unexecuted_blocks=1 00:27:53.135 00:27:53.135 ' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:53.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.135 --rc genhtml_branch_coverage=1 00:27:53.135 --rc genhtml_function_coverage=1 00:27:53.135 --rc genhtml_legend=1 00:27:53.135 --rc geninfo_all_blocks=1 00:27:53.135 --rc geninfo_unexecuted_blocks=1 00:27:53.135 00:27:53.135 ' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:53.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.135 --rc genhtml_branch_coverage=1 00:27:53.135 --rc genhtml_function_coverage=1 00:27:53.135 --rc genhtml_legend=1 00:27:53.135 --rc geninfo_all_blocks=1 00:27:53.135 --rc geninfo_unexecuted_blocks=1 00:27:53.135 00:27:53.135 ' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
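The scripts/common.sh trace above is a dotted-version comparison: lt 1.15 2 decides whether the installed lcov predates 2.x and therefore which LCOV_OPTS to export. A self-contained restatement of the idiom (not the scripts/common.sh source):

    version_lt() {
        local -a a b
        local i
        IFS='.-' read -ra a <<< "$1"
        IFS='.-' read -ra b <<< "$2"
        # Compare field by field, treating missing fields as 0.
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"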
00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:53.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:53.135 08:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:01.284 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:28:01.285 08:42:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:01.285 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:01.285 08:42:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:01.285 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:01.285 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:01.285 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
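The probe above buckets NICs by PCI vendor:device ID before choosing the test ports, here two Intel E810 functions (0x159b) surfacing as cvl_0_0 and cvl_0_1. A stripped-down equivalent using lspci instead of the script's pci_bus_cache, offered only as a sketch under that substitution:

    # Collect E810 PCI addresses (device IDs 0x1592/0x159b per the trace).
    mapfile -t e810 < <(lspci -Dnn | awk '/\[8086:(1592|159b)\]/ {print $1}')
    printf 'Found %d E810 port(s): %s\n' "${#e810[@]}" "${e810[*]}"
    for pci in "${e810[@]}"; do
        ls "/sys/bus/pci/devices/$pci/net/"   # kernel netdev name(s)
    done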
-- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.285 08:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.285 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.285 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.285 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:01.285 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.285 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.285 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.285 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:01.285 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:01.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:28:01.286 00:28:01.286 --- 10.0.0.2 ping statistics --- 00:28:01.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.286 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:01.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:28:01.286 00:28:01.286 --- 10.0.0.1 ping statistics --- 00:28:01.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.286 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=3882253 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 3882253 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3882253 ']' 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
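The nvmf_tcp_init sequence above builds a two-port loopback topology on one box and proves it with one ping in each direction before any NVMe traffic flows. Collected from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # NVMe/TCP port through the firewall, tagged for cleanup later:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator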
00:28:01.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.286 08:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:01.286 [2024-10-01 08:42:52.358509] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:28:01.286 [2024-10-01 08:42:52.358610] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.286 [2024-10-01 08:42:52.433742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:01.286 [2024-10-01 08:42:52.509081] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.286 [2024-10-01 08:42:52.509121] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.286 [2024-10-01 08:42:52.509129] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.286 [2024-10-01 08:42:52.509136] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.286 [2024-10-01 08:42:52.509142] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.286 [2024-10-01 08:42:52.510060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.286 [2024-10-01 08:42:52.510221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3882253 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:01.547 [2024-10-01 08:42:53.343397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.547 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:01.807 Malloc0 00:28:01.807 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:02.068 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.068 08:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.328 [2024-10-01 08:42:54.030766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.328 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:02.589 [2024-10-01 08:42:54.199166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3882651 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3882651 /var/tmp/bdevperf.sock 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3882651 ']' 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:02.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
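With the namespace reachable, the target side comes up as traced at multipath_status.sh@33-42: nvmf_tgt inside the namespace, a TCP transport, one malloc-backed namespace, and an ANA-reporting subsystem listening on two ports so the host sees two distinct paths. Condensed, with $rpc shortening the full rpc.py path shown above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    # -r enables ANA reporting; -m 2 caps the namespace count.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Two listeners on one address give the host two distinct paths:
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421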
00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.589 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:02.849 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.849 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:28:02.849 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:02.849 08:42:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:28:03.420 Nvme0n1 00:28:03.420 08:42:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:03.681 Nvme0n1 00:28:03.681 08:42:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:03.681 08:42:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:05.604 08:42:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:05.604 08:42:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:05.865 08:42:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:06.124 08:42:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:07.064 08:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:07.064 08:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:07.064 08:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.064 08:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:07.325 08:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.325 08:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:07.325 08:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
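On the host side (multipath_status.sh@44-56) a single bdevperf process gets infinite reconnect retries, then the same controller name attached over both listeners; the second attach carries -x multipath, which is what folds the two connections into the one multipath bdev (Nvme0n1) seen above:

    sock=/var/tmp/bdevperf.sock
    "$rpc" -s "$sock" bdev_nvme_set_options -r -1        # retry attaches forever
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10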
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.325 08:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:07.325 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:07.325 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:07.325 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.325 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:07.585 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.585 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:07.585 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.585 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:07.846 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.846 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:07.847 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.847 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:07.847 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.847 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:07.847 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.847 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:08.108 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.108 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:08.108 08:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:08.368 08:42:59 
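Every assertion in these check_status rounds reduces to one probe: pull bdev_nvme_get_io_paths once and select a single field for a single trsvcid with the jq filter traced above. As a helper, assuming the same bdevperf socket:

    port_status() {   # port_status <port> <field> <expected>
        local got
        got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }
    port_status 4420 current true        # exactly one path may be current
    port_status 4421 accessible true     # optimized and non_optimized both qualify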
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:08.368 08:43:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:09.751 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.752 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:09.752 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.752 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:10.012 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.012 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:10.012 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:10.012 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.272 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.272 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:10.272 08:43:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.272 08:43:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:10.272 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.272 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:10.272 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:10.272 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.533 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.533 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:10.533 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:10.793 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:10.793 08:43:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
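Path state is driven entirely from the target by flipping each listener's ANA state; the host learns of the change through the ANA log page and re-elects its current path, which the next check_status round verifies. The helper the @90/@94/@100/@104 steps keep calling, restated:

    set_ANA_state() {   # set_ANA_state <state-for-4420> <state-for-4421>
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized inaccessible   # the @104 transition above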
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.180 08:43:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:12.441 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.441 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:12.441 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.442 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:12.702 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.702 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:12.702 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.702 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:12.702 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.703 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:12.703 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.703 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:12.963 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.963 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:12.964 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:13.225 08:43:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:13.485 08:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:14.427 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:14.427 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:14.427 08:43:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.427 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:14.688 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.688 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:14.688 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.688 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:14.688 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:14.688 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:14.688 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.688 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:14.949 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.949 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:14.949 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.949 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:15.210 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:15.210 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:15.210 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.210 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:15.210 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:15.210 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:15.210 08:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.210 08:43:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:15.470 08:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:15.470 08:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:15.470 08:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:15.730 08:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:15.730 08:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:17.113 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:17.113 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:17.113 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.113 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:17.113 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:17.114 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:17.114 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.114 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:17.114 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:17.114 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:17.114 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.114 08:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:17.379 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.379 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:17.379 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.379 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:17.641 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.641 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:17.641 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.641 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:17.641 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:17.641 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:17.641 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:17.641 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:17.901 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:17.901 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:17.901 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:18.162 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:18.162 08:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:19.546 08:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:19.547 08:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:19.547 08:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.547 08:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:19.547 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:19.547 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:19.547 08:43:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.547 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:19.547 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.547 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:19.547 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.547 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:19.807 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.807 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:19.807 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.807 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:20.068 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:20.069 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:20.069 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.069 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:20.069 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:20.069 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:20.069 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.069 08:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:20.330 08:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:20.330 08:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:20.591 08:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:28:20.591 08:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:20.591 08:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:20.852 08:43:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:21.796 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:21.796 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:21.796 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.796 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:22.057 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.057 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:22.058 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.058 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:22.318 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.318 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:22.318 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.318 08:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:22.318 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.318 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:22.318 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.318 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:22.580 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.580 08:43:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:22.580 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.580 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:22.840 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.840 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:22.840 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.840 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:23.102 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:23.102 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:23.102 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:23.102 08:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:23.364 08:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:24.306 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:24.306 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:24.306 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.306 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:24.567 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:24.567 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:24.567 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.567 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:24.828 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.828 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:24.828 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.828 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:24.828 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.828 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:24.828 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:24.828 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.088 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.088 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:25.088 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.088 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:25.350 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.350 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:25.350 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.350 08:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:25.350 08:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:25.350 08:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:25.350 08:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:25.611 08:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:25.872 08:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
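Every round in this trace has the same shape: set_ANA_state flips the ANA state of the two target listeners, sleep 1 gives the host time to pick up the ANA log page change, and check_status then asserts the six per-path fields (current/connected/accessible for ports 4420 and 4421) over bdevperf's RPC socket. Reconstructed from the @59-@73 trace lines above, the helpers behave roughly like the sketch below; the rpc_py and bdevperf_rpc_sock shorthands are assumptions standing in for the full paths the trace spells out, and the real script may sequence the checks differently:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed shorthand
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  set_ANA_state() {
    # $1/$2 = ANA state for the 4420/4421 listeners (optimized|non_optimized|inaccessible)
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  port_status() {
    # $1 = listener port, $2 = field (current|connected|accessible), $3 = expected value
    local got
    got=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ "$got" == "$3" ]]
  }

  check_status() {
    # args: current(4420,4421) connected(4420,4421) accessible(4420,4421)
    port_status 4420 current "$1" && port_status 4421 current "$2" &&
    port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

The expected values also show what the active_active policy set at @116 means for the current flag: every path in the best ANA group the target reports is current at once, so the optimized/optimized and non_optimized/non_optimized rounds expect true/true (@121 above, @131 below), while the mixed non_optimized/optimized round marks only the optimized listener current (@125).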
00:28:26.814 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:26.814 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:26.814 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:26.814 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:27.075 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.075 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:27.075 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.076 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:27.076 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.076 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:27.076 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:27.076 08:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.340 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.340 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:27.340 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.340 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:27.603 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.603 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:27.603 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.603 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:27.864 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.864 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:27.864 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:27.864 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:27.864 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:27.864 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:27.864 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:28.126 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:28.387 08:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:29.330 08:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:29.330 08:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:29.330 08:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.330 08:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:29.592 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:29.592 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:29.592 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:29.592 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.592 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:29.592 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:29.592 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.592 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:29.854 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:28:29.854 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:29.854 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:29.854 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:30.116 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.116 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:30.116 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.116 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:30.116 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:30.116 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:30.116 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:30.116 08:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3882651 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3882651 ']' 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3882651 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3882651 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3882651' 00:28:30.377 killing process with pid 3882651 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3882651 00:28:30.377 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3882651 00:28:30.377 { 00:28:30.377 "results": [ 00:28:30.377 { 00:28:30.377 "job": "Nvme0n1", 
00:28:30.377 "core_mask": "0x4",
00:28:30.377 "workload": "verify",
00:28:30.377 "status": "terminated",
00:28:30.377 "verify_range": {
00:28:30.377 "start": 0,
00:28:30.377 "length": 16384
00:28:30.377 },
00:28:30.377 "queue_depth": 128,
00:28:30.377 "io_size": 4096,
00:28:30.377 "runtime": 26.686243,
00:28:30.377 "iops": 10845.475700719655,
00:28:30.377 "mibps": 42.36513945593615,
00:28:30.377 "io_failed": 0,
00:28:30.377 "io_timeout": 0,
00:28:30.377 "avg_latency_us": 11784.719601324465,
00:28:30.377 "min_latency_us": 203.09333333333333,
00:28:30.377 "max_latency_us": 3019898.88
00:28:30.377 }
00:28:30.377 ],
00:28:30.377 "core_count": 1
00:28:30.377 }
00:28:30.643 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3882651
00:28:30.643 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:30.643 [2024-10-01 08:42:54.264336] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:28:30.643 [2024-10-01 08:42:54.264398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882651 ]
00:28:30.643 [2024-10-01 08:42:54.315830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:30.643 [2024-10-01 08:42:54.367958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:28:30.643 [2024-10-01 08:42:55.235570] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01
00:28:30.643 Running I/O for 90 seconds...
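Two quick consistency checks on the terminated-job summary above: throughput in MiB/s is iops x io_size / 2^20, i.e. 10845.48 x 4096 / 1048576 ~= 42.365, matching the reported "mibps"; and Little's law gives queue_depth / iops = 128 / 10845.48 s ~= 11802 us, within a fraction of a percent of the reported avg_latency_us of 11784.7. If the object were captured to a file (results.json here is a hypothetical capture, not something the test writes), the same figures fall out with the jq already used throughout this test:

  # recompute throughput and Little's-law latency from the bdevperf summary
  jq -r '.results[0]
    | "throughput \(.iops * .io_size / 1048576) MiB/s",
      "littles-law latency \(.queue_depth / .iops * 1e6) us"' results.json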
00:28:30.643 9539.00 IOPS, 37.26 MiB/s 9651.00 IOPS, 37.70 MiB/s 9612.33 IOPS, 37.55 MiB/s 9632.75 IOPS, 37.63 MiB/s 9919.60 IOPS, 38.75 MiB/s 10475.33 IOPS, 40.92 MiB/s 10851.71 IOPS, 42.39 MiB/s 10798.62 IOPS, 42.18 MiB/s 10677.78 IOPS, 41.71 MiB/s 10572.50 IOPS, 41.30 MiB/s 10485.64 IOPS, 40.96 MiB/s [2024-10-01 08:43:07.282524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
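The nvme_qpair.c command/completion NOTICE pairs interleaved above, which continue for the rest of the capture, are the I/O that were in flight when a listener was flipped to inaccessible: each completion carries ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02), status code type 3h (path related) with status code 02h, and dnr:0, so the multipath bdev is free to resubmit on the surviving path. That retry behavior is why io_failed stays 0 in the summary above despite the many such completions. A rough tally of the completions could be taken from the captured log (try.txt is the file cat'd at @141), for example:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt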
00:28:30.643 [2024-10-01 08:43:07.282723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.643 [2024-10-01 08:43:07.282809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.282982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.282988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.283003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.283009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.283020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.283025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.283035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.283041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.283051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.283057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.283067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.283073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.283083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.283088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.283099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.643 [2024-10-01 08:43:07.283104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:30.643 [2024-10-01 08:43:07.283116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:30.644 [2024-10-01 08:43:07.283202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.644 [2024-10-01 08:43:07.283611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:28:30.644 [2024-10-01 08:43:07.283869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.644 [2024-10-01 08:43:07.283930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:30.644 [2024-10-01 08:43:07.283943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.283948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.283961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.283967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.283979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.283984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:30.645 [2024-10-01 08:43:07.284502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:30.645 [2024-10-01 08:43:07.284915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.645 [2024-10-01 08:43:07.284920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.284936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.284941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.284956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.284962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.284977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.284983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:07.285346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:28:30.646 [2024-10-01 08:43:07.285376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:07.285382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:30.646 10290.08 IOPS, 40.20 MiB/s 9498.54 IOPS, 37.10 MiB/s 8820.07 IOPS, 34.45 MiB/s 8361.80 IOPS, 32.66 MiB/s 8653.50 IOPS, 33.80 MiB/s 8916.53 IOPS, 34.83 MiB/s 9358.78 IOPS, 36.56 MiB/s 9754.11 IOPS, 38.10 MiB/s 10018.70 IOPS, 39.14 MiB/s 10156.57 IOPS, 39.67 MiB/s 10296.41 IOPS, 40.22 MiB/s 10570.09 IOPS, 41.29 MiB/s 10833.42 IOPS, 42.32 MiB/s [2024-10-01 08:43:19.946216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:19.946253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:19.946285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:19.946292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:19.946307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:19.946313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:19.946324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.646 [2024-10-01 08:43:19.946329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:19.946339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.646 [2024-10-01 08:43:19.946345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:19.946355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.646 [2024-10-01 08:43:19.946361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:19.946812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.646 [2024-10-01 08:43:19.946825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:30.646 [2024-10-01 08:43:19.946838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.646 [2024-10-01 08:43:19.946844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:28:30.646 [2024-10-01 08:43:19.946855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.646 [2024-10-01 08:43:19.946860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:30.646 [2024-10-01 08:43:19.946870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.646 [2024-10-01 08:43:19.946876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:30.646 [2024-10-01 08:43:19.946886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.646 [2024-10-01 08:43:19.946891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:28:30.646 [2024-10-01 08:43:19.946902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.646 [2024-10-01 08:43:19.946907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:28:30.646 [2024-10-01 08:43:19.946917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.646 [2024-10-01 08:43:19.946922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:28:30.646 [2024-10-01 08:43:19.946933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.646 [2024-10-01 08:43:19.946938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:28:30.646 [2024-10-01 08:43:19.946948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.646 [2024-10-01 08:43:19.946956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:28:30.646 [2024-10-01 08:43:19.946966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:30.646 [2024-10-01 08:43:19.946971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:28:30.646 10919.72 IOPS, 42.66 MiB/s
10875.15 IOPS, 42.48 MiB/s
Received shutdown signal, test time was about 26.686851 seconds
00:28:30.646
00:28:30.646 Latency(us)
00:28:30.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:30.646 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:30.646 Verification LBA range: start 0x0 length 0x4000
00:28:30.646 Nvme0n1 : 26.69 10845.48 42.37 0.00 0.00 11784.72 203.09 3019898.88
00:28:30.646 ===================================================================================================================
00:28:30.646 Total : 10845.48 42.37 0.00 0.00 11784.72 203.09 3019898.88
00:28:30.646 [2024-10-01 08:43:22.159306] app.c:1033:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:28:30.646 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 3882253 ']'
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 3882253
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3882253 ']'
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3882253
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3882253
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3882253'
killing process with pid 3882253
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3882253
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3882253
00:28:31.171 08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']'
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
08:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:33.087 08:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:33.087
00:28:33.087 real 0m40.153s
00:28:33.087 user 1m43.274s
00:28:33.087 sys 0m11.478s
00:28:33.087 08:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:33.087 08:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:28:33.087 ************************************
00:28:33.087 END TEST nvmf_host_multipath_status
00:28:33.087 ************************************
00:28:33.087 08:43:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
08:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
08:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
08:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:33.087 ************************************
00:28:33.087 START TEST nvmf_discovery_remove_ifc
00:28:33.087 ************************************
00:28:33.087 08:43:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:28:33.087 * Looking for test storage...
00:28:33.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.348 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:33.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.348 --rc genhtml_branch_coverage=1 00:28:33.348 --rc genhtml_function_coverage=1 00:28:33.348 --rc genhtml_legend=1 00:28:33.349 --rc geninfo_all_blocks=1 00:28:33.349 --rc geninfo_unexecuted_blocks=1 00:28:33.349 00:28:33.349 ' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:33.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.349 --rc genhtml_branch_coverage=1 00:28:33.349 --rc genhtml_function_coverage=1 00:28:33.349 --rc genhtml_legend=1 00:28:33.349 --rc geninfo_all_blocks=1 00:28:33.349 --rc geninfo_unexecuted_blocks=1 00:28:33.349 00:28:33.349 ' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:33.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.349 --rc genhtml_branch_coverage=1 00:28:33.349 --rc genhtml_function_coverage=1 00:28:33.349 --rc genhtml_legend=1 00:28:33.349 --rc geninfo_all_blocks=1 00:28:33.349 --rc geninfo_unexecuted_blocks=1 00:28:33.349 00:28:33.349 ' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:33.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.349 --rc genhtml_branch_coverage=1 00:28:33.349 --rc genhtml_function_coverage=1 00:28:33.349 --rc genhtml_legend=1 00:28:33.349 --rc geninfo_all_blocks=1 00:28:33.349 --rc geninfo_unexecuted_blocks=1 00:28:33.349 00:28:33.349 ' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.349 
08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.349 08:43:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:41.498 08:43:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:41.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:41.498 08:43:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:41.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:41.498 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:41.499 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:41.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 
-- # (( 2 == 0 )) 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.499 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:28:41.499 00:28:41.499 --- 10.0.0.2 ping statistics --- 00:28:41.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.499 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:28:41.499 00:28:41.499 --- 10.0.0.1 ping statistics --- 00:28:41.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.499 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=3892551 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 3892551 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3892551 ']' 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
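
The nvmf_tcp_init sequence traced above reduces to a short, reproducible recipe. A minimal sketch, assuming two cabled ports (the cvl_0_0/cvl_0_1 names come from this rig's E810 NICs; substitute your own interfaces; run as root):

    #!/usr/bin/env bash
    TARGET_IF=cvl_0_0        # port that moves into the target namespace
    INITIATOR_IF=cvl_0_1     # port that stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port, then verify reachability in both directions
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
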
00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.499 08:43:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.499 [2024-10-01 08:43:32.671515] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:28:41.500 [2024-10-01 08:43:32.671582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.500 [2024-10-01 08:43:32.757951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.500 [2024-10-01 08:43:32.850821] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.500 [2024-10-01 08:43:32.850881] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.500 [2024-10-01 08:43:32.850889] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.500 [2024-10-01 08:43:32.850897] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.500 [2024-10-01 08:43:32.850903] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.500 [2024-10-01 08:43:32.851724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.802 [2024-10-01 08:43:33.537092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.802 [2024-10-01 08:43:33.545372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:41.802 null0 00:28:41.802 [2024-10-01 08:43:33.577295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3892856 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3892856 /tmp/host.sock 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3892856 ']' 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:41.802 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.802 08:43:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.093 [2024-10-01 08:43:33.653358] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:28:42.093 [2024-10-01 08:43:33.653424] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892856 ] 00:28:42.093 [2024-10-01 08:43:33.717809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.093 [2024-10-01 08:43:33.792216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:42.690 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.691 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.951 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.952 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:42.952 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:42.952 08:43:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:43.891 [2024-10-01 08:43:35.575206] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:43.891 [2024-10-01 08:43:35.575230] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:43.891 [2024-10-01 08:43:35.575245] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:43.891 [2024-10-01 08:43:35.661528] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:44.151 [2024-10-01 08:43:35.882100] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:44.151 [2024-10-01 08:43:35.882149] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:44.151 [2024-10-01 08:43:35.882170] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:44.151 [2024-10-01 08:43:35.882184] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:44.151 [2024-10-01 08:43:35.882209] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:44.151 [2024-10-01 08:43:35.885077] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24a0600 was disconnected and freed. delete nvme_qpair. 
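
For reference, the host-side setup traced above maps onto plain scripts/rpc.py calls; rpc_cmd in this suite is a thin wrapper around rpc.py, and the flags below are copied verbatim from the trace (paths assume the SPDK repo root):

    # second SPDK app: the discovery host, driven over /tmp/host.sock
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!

    RPC="scripts/rpc.py -s /tmp/host.sock"
    $RPC bdev_nvme_set_options -e 1       # option copied verbatim from the trace
    $RPC framework_start_init             # required because of --wait-for-rpc
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
    # one sorted line of bdev names; "nvme0n1" once the subsystem attaches
    $RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
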
00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:44.151 08:43:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:44.412 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:44.412 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.412 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.412 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.413 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.413 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.413 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:44.413 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.413 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.413 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:44.413 08:43:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:45.356 08:43:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:45.356 08:43:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:46.742 08:43:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:47.684 08:43:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:48.625 08:43:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:48.625 08:43:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:49.566 [2024-10-01 08:43:41.323324] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:49.566 [2024-10-01 08:43:41.323375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.566 [2024-10-01 08:43:41.323387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.566 [2024-10-01 08:43:41.323397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.566 [2024-10-01 08:43:41.323410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.566 [2024-10-01 08:43:41.323419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.566 [2024-10-01 08:43:41.323427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.566 [2024-10-01 08:43:41.323435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.566 [2024-10-01 08:43:41.323442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.566 [2024-10-01 08:43:41.323451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.566 [2024-10-01 08:43:41.323459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.566 [2024-10-01 08:43:41.323466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ce80 is same with the state(6) to be set 00:28:49.566 [2024-10-01 08:43:41.333345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247ce80 (9): Bad file descriptor 00:28:49.566 08:43:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:49.566 08:43:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:49.566 08:43:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.566 08:43:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:49.566 08:43:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.566 08:43:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:28:49.566 08:43:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:49.566 [2024-10-01 08:43:41.343387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:50.948 [2024-10-01 08:43:42.400040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:50.948 [2024-10-01 08:43:42.400091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247ce80 with addr=10.0.0.2, port=4420 00:28:50.948 [2024-10-01 08:43:42.400105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ce80 is same with the state(6) to be set 00:28:50.948 [2024-10-01 08:43:42.400135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247ce80 (9): Bad file descriptor 00:28:50.948 [2024-10-01 08:43:42.400201] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:50.948 [2024-10-01 08:43:42.400224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:50.948 [2024-10-01 08:43:42.400232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:50.948 [2024-10-01 08:43:42.400241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:50.948 [2024-10-01 08:43:42.400259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.948 [2024-10-01 08:43:42.400268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:50.948 08:43:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.948 08:43:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:50.948 08:43:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:51.888 [2024-10-01 08:43:43.402647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:51.888 [2024-10-01 08:43:43.402674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:51.888 [2024-10-01 08:43:43.402683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:51.888 [2024-10-01 08:43:43.402690] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:51.888 [2024-10-01 08:43:43.402704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
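
The repeating get_bdev_list/sleep blocks above are a simple poll loop. A plausible reconstruction from the traced expansions (the function bodies are inferred, not copied from the script; rpc_cmd stands for the suite's scripts/rpc.py wrapper):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # poll once per second until the bdev list matches: '' after the
        # interface is removed, nvme1n1 after rediscovery
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev ''    # block until nvme0n1 is gone
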
00:28:51.888 [2024-10-01 08:43:43.402725] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:51.888 [2024-10-01 08:43:43.402748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.888 [2024-10-01 08:43:43.402758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.888 [2024-10-01 08:43:43.402769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.888 [2024-10-01 08:43:43.402776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.888 [2024-10-01 08:43:43.402786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.888 [2024-10-01 08:43:43.402793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.888 [2024-10-01 08:43:43.402801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.888 [2024-10-01 08:43:43.402809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.888 [2024-10-01 08:43:43.402818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.888 [2024-10-01 08:43:43.402825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.888 [2024-10-01 08:43:43.402833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
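
The error cascade above is the intended effect of the failure injection traced earlier, interpreted through the three timeouts given to bdev_nvme_start_discovery; a sketch of that step with the timeouts annotated (the causal reading is mine, pieced together from the trace):

    # yank the target-side interface while the connection is up
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # from here: reconnects are retried every --reconnect-delay-sec (1 s),
    # queued I/O is failed back after --fast-io-fail-timeout-sec (1 s), and
    # the controller is deleted once --ctrlr-loss-timeout-sec (2 s) expires,
    # at which point nvme0n1 drops out of bdev_get_bdevs and
    # wait_for_bdev '' returns.
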
00:28:51.888 [2024-10-01 08:43:43.403562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246c5c0 (9): Bad file descriptor 00:28:51.888 [2024-10-01 08:43:43.404573] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:51.888 [2024-10-01 08:43:43.404583] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:51.888 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.889 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.889 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:51.889 08:43:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:52.828 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.828 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.828 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.828 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.828 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.828 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:52.828 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.828 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.087 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:53.087 08:43:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:53.663 [2024-10-01 08:43:45.461153] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:53.663 [2024-10-01 08:43:45.461170] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:53.663 [2024-10-01 08:43:45.461182] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:53.923 [2024-10-01 08:43:45.587598] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.923 [2024-10-01 08:43:45.691581] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:53.923 [2024-10-01 08:43:45.691624] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:53.923 [2024-10-01 08:43:45.691644] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:53.923 [2024-10-01 08:43:45.691657] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:53.923 [2024-10-01 08:43:45.691670] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:53.923 [2024-10-01 08:43:45.699381] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2478490 was disconnected and freed. delete nvme_qpair. 
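
With nvme1n1 re-attached the test has passed; the teardown that follows (killprocess plus nvmftestfini) reduces to roughly the lines below. The kill-by-pid and the ip netns delete are approximations of the suite's killprocess and _remove_spdk_ns helpers, not copies:

    kill "$hostpid" "$nvmfpid"        # the two SPDK apps started earlier
    modprobe -v -r nvme-tcp           # also unloads nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    # strip only the rules the suite tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # approximates _remove_spdk_ns
    ip -4 addr flush cvl_0_1
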
00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3892856 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3892856 ']' 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3892856 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.923 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3892856 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3892856' 00:28:54.183 killing process with pid 3892856 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3892856 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3892856 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:54.183 rmmod nvme_tcp 00:28:54.183 rmmod nvme_fabrics 00:28:54.183 rmmod nvme_keyring 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 3892551 ']' 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 3892551 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3892551 ']' 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3892551 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@955 -- # uname 00:28:54.183 08:43:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:54.183 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3892551 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3892551' 00:28:54.443 killing process with pid 3892551 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3892551 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3892551 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:54.443 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:28:54.444 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:54.444 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:28:54.444 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.444 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:54.444 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.444 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.444 08:43:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.987 00:28:56.987 real 0m23.346s 00:28:56.987 user 0m27.381s 00:28:56.987 sys 0m7.030s 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:56.987 ************************************ 00:28:56.987 END TEST nvmf_discovery_remove_ifc 00:28:56.987 ************************************ 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.987 ************************************ 00:28:56.987 
START TEST nvmf_identify_kernel_target 00:28:56.987 ************************************ 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:56.987 * Looking for test storage... 00:28:56.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:56.987 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:56.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.988 --rc genhtml_branch_coverage=1 00:28:56.988 --rc genhtml_function_coverage=1 00:28:56.988 --rc genhtml_legend=1 00:28:56.988 --rc geninfo_all_blocks=1 00:28:56.988 --rc geninfo_unexecuted_blocks=1 00:28:56.988 00:28:56.988 ' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:56.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.988 --rc genhtml_branch_coverage=1 00:28:56.988 --rc genhtml_function_coverage=1 00:28:56.988 --rc genhtml_legend=1 00:28:56.988 --rc geninfo_all_blocks=1 00:28:56.988 --rc geninfo_unexecuted_blocks=1 00:28:56.988 00:28:56.988 ' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:56.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.988 --rc genhtml_branch_coverage=1 00:28:56.988 --rc genhtml_function_coverage=1 00:28:56.988 --rc genhtml_legend=1 00:28:56.988 --rc geninfo_all_blocks=1 00:28:56.988 --rc geninfo_unexecuted_blocks=1 00:28:56.988 00:28:56.988 ' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:56.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.988 --rc genhtml_branch_coverage=1 00:28:56.988 --rc genhtml_function_coverage=1 00:28:56.988 --rc genhtml_legend=1 00:28:56.988 --rc geninfo_all_blocks=1 00:28:56.988 --rc geninfo_unexecuted_blocks=1 00:28:56.988 00:28:56.988 ' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:56.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.988 08:43:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.126 08:43:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:05.126 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # 
echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:05.126 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:05.126 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:05.126 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 
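Note: the scan traced above walks the e810 PCI ID list (0x8086:0x1592, 0x8086:0x159b), matches both ports of the E810 NIC, and resolves each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that lookup, assuming the 0x159b device ID seen in this run (IDs and interface names may differ on other hosts):
#!/usr/bin/env bash
# Hedged sketch: find net interfaces backed by Intel E810 (0x8086:0x159b),
# mirroring the gather_supported_nvmf_pci_devs logic traced above.
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor")   # e.g. 0x8086
  device=$(<"$dev/device")   # e.g. 0x159b
  [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
  for net in "$dev"/net/*; do
    [[ -e $net ]] && echo "Found net device under ${dev##*/}: ${net##*/}"
  done
done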
00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.126 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping 
-c 1 10.0.0.2 00:29:05.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:29:05.126 00:29:05.126 --- 10.0.0.2 ping statistics --- 00:29:05.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.127 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:05.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:29:05.127 00:29:05.127 --- 10.0.0.1 ping statistics --- 00:29:05.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.127 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:05.127 08:43:55 
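Note: nvmf_tcp_init above turns the two E810 ports into a back-to-back test pair: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits TCP port 4420, and one ping in each direction verifies the path. A minimal reproduction of that topology, assuming root privileges and the interface/namespace names from this run:
# Hedged sketch of the namespace topology built by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns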
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:05.127 08:43:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:07.671 Waiting for block devices as requested 00:29:07.671 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:07.671 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:07.932 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:07.932 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:07.932 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:08.192 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:08.192 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:08.192 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:08.192 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:08.453 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:08.453 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:08.713 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:08.713 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:08.713 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:08.713 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:08.973 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:08.973 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:09.234 08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:09.234 08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:09.234 08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:29:09.234 08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:09.234 08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:09.234 08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:09.234 08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:29:09.234 
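Note: configure_kernel_target drives the Linux kernel NVMe-oF target entirely through the configfs tree rooted at /sys/kernel/config/nvmet; the subsystem, namespace, and port directories it populates are the paths set above. A minimal sketch of those steps with the values from this run (the xtrace below does not show the redirect targets of the echo commands, so the attribute file names here are inferred from the kernel nvmet configfs ABI; /dev/nvme0n1 is the block device the trace selects after the GPT check):
# Hedged sketch of the configfs flow performed by configure_kernel_target.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # shows up as Model Number below
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose the subsystem on the port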
08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:09.234 08:44:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:09.234 No valid GPT data, bailing 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:29:09.234 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:09.496 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:29:09.496 00:29:09.496 Discovery Log Number of Records 2, Generation counter 2 00:29:09.496 =====Discovery Log Entry 0====== 00:29:09.496 trtype: tcp 00:29:09.496 adrfam: ipv4 00:29:09.496 subtype: current discovery subsystem 00:29:09.496 treq: not specified, sq flow control disable supported 00:29:09.496 portid: 1 00:29:09.496 trsvcid: 4420 00:29:09.496 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:09.496 traddr: 10.0.0.1 00:29:09.496 eflags: none 00:29:09.496 sectype: none 00:29:09.496 =====Discovery Log Entry 1====== 00:29:09.496 trtype: tcp 00:29:09.496 adrfam: ipv4 00:29:09.496 subtype: nvme subsystem 00:29:09.496 treq: not specified, sq flow control disable supported 00:29:09.496 portid: 1 00:29:09.496 trsvcid: 4420 00:29:09.496 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:09.496 traddr: 
10.0.0.1 00:29:09.496 eflags: none 00:29:09.496 sectype: none 00:29:09.496 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:09.496 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:09.496 ===================================================== 00:29:09.496 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:09.496 ===================================================== 00:29:09.496 Controller Capabilities/Features 00:29:09.496 ================================ 00:29:09.496 Vendor ID: 0000 00:29:09.496 Subsystem Vendor ID: 0000 00:29:09.496 Serial Number: 0808e1e857bc2cbd25f1 00:29:09.496 Model Number: Linux 00:29:09.496 Firmware Version: 6.8.9-20 00:29:09.496 Recommended Arb Burst: 0 00:29:09.496 IEEE OUI Identifier: 00 00 00 00:29:09.496 Multi-path I/O 00:29:09.496 May have multiple subsystem ports: No 00:29:09.496 May have multiple controllers: No 00:29:09.496 Associated with SR-IOV VF: No 00:29:09.496 Max Data Transfer Size: Unlimited 00:29:09.496 Max Number of Namespaces: 0 00:29:09.496 Max Number of I/O Queues: 1024 00:29:09.496 NVMe Specification Version (VS): 1.3 00:29:09.496 NVMe Specification Version (Identify): 1.3 00:29:09.496 Maximum Queue Entries: 1024 00:29:09.496 Contiguous Queues Required: No 00:29:09.496 Arbitration Mechanisms Supported 00:29:09.496 Weighted Round Robin: Not Supported 00:29:09.496 Vendor Specific: Not Supported 00:29:09.496 Reset Timeout: 7500 ms 00:29:09.496 Doorbell Stride: 4 bytes 00:29:09.496 NVM Subsystem Reset: Not Supported 00:29:09.496 Command Sets Supported 00:29:09.496 NVM Command Set: Supported 00:29:09.496 Boot Partition: Not Supported 00:29:09.496 Memory Page Size Minimum: 4096 bytes 00:29:09.496 Memory Page Size Maximum: 4096 bytes 00:29:09.496 Persistent Memory Region: Not Supported 00:29:09.496 Optional Asynchronous Events Supported 00:29:09.496 Namespace Attribute Notices: Not Supported 00:29:09.496 Firmware Activation Notices: Not Supported 00:29:09.496 ANA Change Notices: Not Supported 00:29:09.496 PLE Aggregate Log Change Notices: Not Supported 00:29:09.496 LBA Status Info Alert Notices: Not Supported 00:29:09.496 EGE Aggregate Log Change Notices: Not Supported 00:29:09.496 Normal NVM Subsystem Shutdown event: Not Supported 00:29:09.496 Zone Descriptor Change Notices: Not Supported 00:29:09.496 Discovery Log Change Notices: Supported 00:29:09.496 Controller Attributes 00:29:09.496 128-bit Host Identifier: Not Supported 00:29:09.496 Non-Operational Permissive Mode: Not Supported 00:29:09.496 NVM Sets: Not Supported 00:29:09.496 Read Recovery Levels: Not Supported 00:29:09.496 Endurance Groups: Not Supported 00:29:09.496 Predictable Latency Mode: Not Supported 00:29:09.496 Traffic Based Keep ALive: Not Supported 00:29:09.496 Namespace Granularity: Not Supported 00:29:09.496 SQ Associations: Not Supported 00:29:09.496 UUID List: Not Supported 00:29:09.496 Multi-Domain Subsystem: Not Supported 00:29:09.496 Fixed Capacity Management: Not Supported 00:29:09.496 Variable Capacity Management: Not Supported 00:29:09.496 Delete Endurance Group: Not Supported 00:29:09.496 Delete NVM Set: Not Supported 00:29:09.496 Extended LBA Formats Supported: Not Supported 00:29:09.496 Flexible Data Placement Supported: Not Supported 00:29:09.496 00:29:09.496 Controller Memory Buffer Support 00:29:09.496 ================================ 
00:29:09.496 Supported: No 00:29:09.496 00:29:09.496 Persistent Memory Region Support 00:29:09.496 ================================ 00:29:09.496 Supported: No 00:29:09.496 00:29:09.496 Admin Command Set Attributes 00:29:09.496 ============================ 00:29:09.496 Security Send/Receive: Not Supported 00:29:09.496 Format NVM: Not Supported 00:29:09.496 Firmware Activate/Download: Not Supported 00:29:09.496 Namespace Management: Not Supported 00:29:09.496 Device Self-Test: Not Supported 00:29:09.496 Directives: Not Supported 00:29:09.496 NVMe-MI: Not Supported 00:29:09.496 Virtualization Management: Not Supported 00:29:09.496 Doorbell Buffer Config: Not Supported 00:29:09.496 Get LBA Status Capability: Not Supported 00:29:09.496 Command & Feature Lockdown Capability: Not Supported 00:29:09.496 Abort Command Limit: 1 00:29:09.496 Async Event Request Limit: 1 00:29:09.496 Number of Firmware Slots: N/A 00:29:09.496 Firmware Slot 1 Read-Only: N/A 00:29:09.496 Firmware Activation Without Reset: N/A 00:29:09.496 Multiple Update Detection Support: N/A 00:29:09.496 Firmware Update Granularity: No Information Provided 00:29:09.496 Per-Namespace SMART Log: No 00:29:09.496 Asymmetric Namespace Access Log Page: Not Supported 00:29:09.496 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:09.496 Command Effects Log Page: Not Supported 00:29:09.496 Get Log Page Extended Data: Supported 00:29:09.496 Telemetry Log Pages: Not Supported 00:29:09.496 Persistent Event Log Pages: Not Supported 00:29:09.496 Supported Log Pages Log Page: May Support 00:29:09.496 Commands Supported & Effects Log Page: Not Supported 00:29:09.496 Feature Identifiers & Effects Log Page:May Support 00:29:09.496 NVMe-MI Commands & Effects Log Page: May Support 00:29:09.496 Data Area 4 for Telemetry Log: Not Supported 00:29:09.496 Error Log Page Entries Supported: 1 00:29:09.496 Keep Alive: Not Supported 00:29:09.496 00:29:09.496 NVM Command Set Attributes 00:29:09.496 ========================== 00:29:09.496 Submission Queue Entry Size 00:29:09.496 Max: 1 00:29:09.496 Min: 1 00:29:09.496 Completion Queue Entry Size 00:29:09.496 Max: 1 00:29:09.496 Min: 1 00:29:09.496 Number of Namespaces: 0 00:29:09.496 Compare Command: Not Supported 00:29:09.496 Write Uncorrectable Command: Not Supported 00:29:09.496 Dataset Management Command: Not Supported 00:29:09.496 Write Zeroes Command: Not Supported 00:29:09.496 Set Features Save Field: Not Supported 00:29:09.496 Reservations: Not Supported 00:29:09.496 Timestamp: Not Supported 00:29:09.496 Copy: Not Supported 00:29:09.496 Volatile Write Cache: Not Present 00:29:09.496 Atomic Write Unit (Normal): 1 00:29:09.496 Atomic Write Unit (PFail): 1 00:29:09.496 Atomic Compare & Write Unit: 1 00:29:09.496 Fused Compare & Write: Not Supported 00:29:09.496 Scatter-Gather List 00:29:09.496 SGL Command Set: Supported 00:29:09.496 SGL Keyed: Not Supported 00:29:09.496 SGL Bit Bucket Descriptor: Not Supported 00:29:09.496 SGL Metadata Pointer: Not Supported 00:29:09.496 Oversized SGL: Not Supported 00:29:09.496 SGL Metadata Address: Not Supported 00:29:09.496 SGL Offset: Supported 00:29:09.496 Transport SGL Data Block: Not Supported 00:29:09.496 Replay Protected Memory Block: Not Supported 00:29:09.496 00:29:09.496 Firmware Slot Information 00:29:09.496 ========================= 00:29:09.496 Active slot: 0 00:29:09.496 00:29:09.497 00:29:09.497 Error Log 00:29:09.497 ========= 00:29:09.497 00:29:09.497 Active Namespaces 00:29:09.497 ================= 00:29:09.497 Discovery Log Page 00:29:09.497 
================== 00:29:09.497 Generation Counter: 2 00:29:09.497 Number of Records: 2 00:29:09.497 Record Format: 0 00:29:09.497 00:29:09.497 Discovery Log Entry 0 00:29:09.497 ---------------------- 00:29:09.497 Transport Type: 3 (TCP) 00:29:09.497 Address Family: 1 (IPv4) 00:29:09.497 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:09.497 Entry Flags: 00:29:09.497 Duplicate Returned Information: 0 00:29:09.497 Explicit Persistent Connection Support for Discovery: 0 00:29:09.497 Transport Requirements: 00:29:09.497 Secure Channel: Not Specified 00:29:09.497 Port ID: 1 (0x0001) 00:29:09.497 Controller ID: 65535 (0xffff) 00:29:09.497 Admin Max SQ Size: 32 00:29:09.497 Transport Service Identifier: 4420 00:29:09.497 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:09.497 Transport Address: 10.0.0.1 00:29:09.497 Discovery Log Entry 1 00:29:09.497 ---------------------- 00:29:09.497 Transport Type: 3 (TCP) 00:29:09.497 Address Family: 1 (IPv4) 00:29:09.497 Subsystem Type: 2 (NVM Subsystem) 00:29:09.497 Entry Flags: 00:29:09.497 Duplicate Returned Information: 0 00:29:09.497 Explicit Persistent Connection Support for Discovery: 0 00:29:09.497 Transport Requirements: 00:29:09.497 Secure Channel: Not Specified 00:29:09.497 Port ID: 1 (0x0001) 00:29:09.497 Controller ID: 65535 (0xffff) 00:29:09.497 Admin Max SQ Size: 32 00:29:09.497 Transport Service Identifier: 4420 00:29:09.497 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:09.497 Transport Address: 10.0.0.1 00:29:09.497 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:09.497 get_feature(0x01) failed 00:29:09.497 get_feature(0x02) failed 00:29:09.497 get_feature(0x04) failed 00:29:09.497 ===================================================== 00:29:09.497 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:09.497 ===================================================== 00:29:09.497 Controller Capabilities/Features 00:29:09.497 ================================ 00:29:09.497 Vendor ID: 0000 00:29:09.497 Subsystem Vendor ID: 0000 00:29:09.497 Serial Number: 53fa43134deaa4695038 00:29:09.497 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:09.497 Firmware Version: 6.8.9-20 00:29:09.497 Recommended Arb Burst: 6 00:29:09.497 IEEE OUI Identifier: 00 00 00 00:29:09.497 Multi-path I/O 00:29:09.497 May have multiple subsystem ports: Yes 00:29:09.497 May have multiple controllers: Yes 00:29:09.497 Associated with SR-IOV VF: No 00:29:09.497 Max Data Transfer Size: Unlimited 00:29:09.497 Max Number of Namespaces: 1024 00:29:09.497 Max Number of I/O Queues: 128 00:29:09.497 NVMe Specification Version (VS): 1.3 00:29:09.497 NVMe Specification Version (Identify): 1.3 00:29:09.497 Maximum Queue Entries: 1024 00:29:09.497 Contiguous Queues Required: No 00:29:09.497 Arbitration Mechanisms Supported 00:29:09.497 Weighted Round Robin: Not Supported 00:29:09.497 Vendor Specific: Not Supported 00:29:09.497 Reset Timeout: 7500 ms 00:29:09.497 Doorbell Stride: 4 bytes 00:29:09.497 NVM Subsystem Reset: Not Supported 00:29:09.497 Command Sets Supported 00:29:09.497 NVM Command Set: Supported 00:29:09.497 Boot Partition: Not Supported 00:29:09.497 Memory Page Size Minimum: 4096 bytes 00:29:09.497 Memory Page Size Maximum: 4096 bytes 00:29:09.497 Persistent Memory Region: Not 
Supported 00:29:09.497 Optional Asynchronous Events Supported 00:29:09.497 Namespace Attribute Notices: Supported 00:29:09.497 Firmware Activation Notices: Not Supported 00:29:09.497 ANA Change Notices: Supported 00:29:09.497 PLE Aggregate Log Change Notices: Not Supported 00:29:09.497 LBA Status Info Alert Notices: Not Supported 00:29:09.497 EGE Aggregate Log Change Notices: Not Supported 00:29:09.497 Normal NVM Subsystem Shutdown event: Not Supported 00:29:09.497 Zone Descriptor Change Notices: Not Supported 00:29:09.497 Discovery Log Change Notices: Not Supported 00:29:09.497 Controller Attributes 00:29:09.497 128-bit Host Identifier: Supported 00:29:09.497 Non-Operational Permissive Mode: Not Supported 00:29:09.497 NVM Sets: Not Supported 00:29:09.497 Read Recovery Levels: Not Supported 00:29:09.497 Endurance Groups: Not Supported 00:29:09.497 Predictable Latency Mode: Not Supported 00:29:09.497 Traffic Based Keep ALive: Supported 00:29:09.497 Namespace Granularity: Not Supported 00:29:09.497 SQ Associations: Not Supported 00:29:09.497 UUID List: Not Supported 00:29:09.497 Multi-Domain Subsystem: Not Supported 00:29:09.497 Fixed Capacity Management: Not Supported 00:29:09.497 Variable Capacity Management: Not Supported 00:29:09.497 Delete Endurance Group: Not Supported 00:29:09.497 Delete NVM Set: Not Supported 00:29:09.497 Extended LBA Formats Supported: Not Supported 00:29:09.497 Flexible Data Placement Supported: Not Supported 00:29:09.497 00:29:09.497 Controller Memory Buffer Support 00:29:09.497 ================================ 00:29:09.497 Supported: No 00:29:09.497 00:29:09.497 Persistent Memory Region Support 00:29:09.497 ================================ 00:29:09.497 Supported: No 00:29:09.497 00:29:09.497 Admin Command Set Attributes 00:29:09.497 ============================ 00:29:09.497 Security Send/Receive: Not Supported 00:29:09.497 Format NVM: Not Supported 00:29:09.497 Firmware Activate/Download: Not Supported 00:29:09.497 Namespace Management: Not Supported 00:29:09.497 Device Self-Test: Not Supported 00:29:09.497 Directives: Not Supported 00:29:09.497 NVMe-MI: Not Supported 00:29:09.497 Virtualization Management: Not Supported 00:29:09.497 Doorbell Buffer Config: Not Supported 00:29:09.497 Get LBA Status Capability: Not Supported 00:29:09.497 Command & Feature Lockdown Capability: Not Supported 00:29:09.497 Abort Command Limit: 4 00:29:09.497 Async Event Request Limit: 4 00:29:09.497 Number of Firmware Slots: N/A 00:29:09.497 Firmware Slot 1 Read-Only: N/A 00:29:09.497 Firmware Activation Without Reset: N/A 00:29:09.497 Multiple Update Detection Support: N/A 00:29:09.497 Firmware Update Granularity: No Information Provided 00:29:09.497 Per-Namespace SMART Log: Yes 00:29:09.497 Asymmetric Namespace Access Log Page: Supported 00:29:09.497 ANA Transition Time : 10 sec 00:29:09.497 00:29:09.497 Asymmetric Namespace Access Capabilities 00:29:09.497 ANA Optimized State : Supported 00:29:09.497 ANA Non-Optimized State : Supported 00:29:09.497 ANA Inaccessible State : Supported 00:29:09.497 ANA Persistent Loss State : Supported 00:29:09.497 ANA Change State : Supported 00:29:09.497 ANAGRPID is not changed : No 00:29:09.497 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:09.497 00:29:09.497 ANA Group Identifier Maximum : 128 00:29:09.497 Number of ANA Group Identifiers : 128 00:29:09.497 Max Number of Allowed Namespaces : 1024 00:29:09.497 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:09.497 Command Effects Log Page: Supported 00:29:09.497 Get Log Page Extended Data: 
Supported 00:29:09.497 Telemetry Log Pages: Not Supported 00:29:09.497 Persistent Event Log Pages: Not Supported 00:29:09.497 Supported Log Pages Log Page: May Support 00:29:09.497 Commands Supported & Effects Log Page: Not Supported 00:29:09.497 Feature Identifiers & Effects Log Page:May Support 00:29:09.497 NVMe-MI Commands & Effects Log Page: May Support 00:29:09.497 Data Area 4 for Telemetry Log: Not Supported 00:29:09.497 Error Log Page Entries Supported: 128 00:29:09.497 Keep Alive: Supported 00:29:09.497 Keep Alive Granularity: 1000 ms 00:29:09.497 00:29:09.497 NVM Command Set Attributes 00:29:09.497 ========================== 00:29:09.497 Submission Queue Entry Size 00:29:09.497 Max: 64 00:29:09.497 Min: 64 00:29:09.497 Completion Queue Entry Size 00:29:09.497 Max: 16 00:29:09.497 Min: 16 00:29:09.497 Number of Namespaces: 1024 00:29:09.497 Compare Command: Not Supported 00:29:09.497 Write Uncorrectable Command: Not Supported 00:29:09.497 Dataset Management Command: Supported 00:29:09.497 Write Zeroes Command: Supported 00:29:09.497 Set Features Save Field: Not Supported 00:29:09.497 Reservations: Not Supported 00:29:09.497 Timestamp: Not Supported 00:29:09.497 Copy: Not Supported 00:29:09.497 Volatile Write Cache: Present 00:29:09.497 Atomic Write Unit (Normal): 1 00:29:09.497 Atomic Write Unit (PFail): 1 00:29:09.497 Atomic Compare & Write Unit: 1 00:29:09.497 Fused Compare & Write: Not Supported 00:29:09.497 Scatter-Gather List 00:29:09.497 SGL Command Set: Supported 00:29:09.497 SGL Keyed: Not Supported 00:29:09.497 SGL Bit Bucket Descriptor: Not Supported 00:29:09.497 SGL Metadata Pointer: Not Supported 00:29:09.497 Oversized SGL: Not Supported 00:29:09.497 SGL Metadata Address: Not Supported 00:29:09.497 SGL Offset: Supported 00:29:09.497 Transport SGL Data Block: Not Supported 00:29:09.498 Replay Protected Memory Block: Not Supported 00:29:09.498 00:29:09.498 Firmware Slot Information 00:29:09.498 ========================= 00:29:09.498 Active slot: 0 00:29:09.498 00:29:09.498 Asymmetric Namespace Access 00:29:09.498 =========================== 00:29:09.498 Change Count : 0 00:29:09.498 Number of ANA Group Descriptors : 1 00:29:09.498 ANA Group Descriptor : 0 00:29:09.498 ANA Group ID : 1 00:29:09.498 Number of NSID Values : 1 00:29:09.498 Change Count : 0 00:29:09.498 ANA State : 1 00:29:09.498 Namespace Identifier : 1 00:29:09.498 00:29:09.498 Commands Supported and Effects 00:29:09.498 ============================== 00:29:09.498 Admin Commands 00:29:09.498 -------------- 00:29:09.498 Get Log Page (02h): Supported 00:29:09.498 Identify (06h): Supported 00:29:09.498 Abort (08h): Supported 00:29:09.498 Set Features (09h): Supported 00:29:09.498 Get Features (0Ah): Supported 00:29:09.498 Asynchronous Event Request (0Ch): Supported 00:29:09.498 Keep Alive (18h): Supported 00:29:09.498 I/O Commands 00:29:09.498 ------------ 00:29:09.498 Flush (00h): Supported 00:29:09.498 Write (01h): Supported LBA-Change 00:29:09.498 Read (02h): Supported 00:29:09.498 Write Zeroes (08h): Supported LBA-Change 00:29:09.498 Dataset Management (09h): Supported 00:29:09.498 00:29:09.498 Error Log 00:29:09.498 ========= 00:29:09.498 Entry: 0 00:29:09.498 Error Count: 0x3 00:29:09.498 Submission Queue Id: 0x0 00:29:09.498 Command Id: 0x5 00:29:09.498 Phase Bit: 0 00:29:09.498 Status Code: 0x2 00:29:09.498 Status Code Type: 0x0 00:29:09.498 Do Not Retry: 1 00:29:09.759 Error Location: 0x28 00:29:09.759 LBA: 0x0 00:29:09.759 Namespace: 0x0 00:29:09.759 Vendor Log Page: 0x0 00:29:09.759 ----------- 
00:29:09.759 Entry: 1 00:29:09.759 Error Count: 0x2 00:29:09.759 Submission Queue Id: 0x0 00:29:09.759 Command Id: 0x5 00:29:09.759 Phase Bit: 0 00:29:09.759 Status Code: 0x2 00:29:09.759 Status Code Type: 0x0 00:29:09.759 Do Not Retry: 1 00:29:09.759 Error Location: 0x28 00:29:09.759 LBA: 0x0 00:29:09.759 Namespace: 0x0 00:29:09.759 Vendor Log Page: 0x0 00:29:09.759 ----------- 00:29:09.759 Entry: 2 00:29:09.759 Error Count: 0x1 00:29:09.759 Submission Queue Id: 0x0 00:29:09.759 Command Id: 0x4 00:29:09.759 Phase Bit: 0 00:29:09.759 Status Code: 0x2 00:29:09.759 Status Code Type: 0x0 00:29:09.759 Do Not Retry: 1 00:29:09.759 Error Location: 0x28 00:29:09.759 LBA: 0x0 00:29:09.759 Namespace: 0x0 00:29:09.759 Vendor Log Page: 0x0 00:29:09.759 00:29:09.759 Number of Queues 00:29:09.759 ================ 00:29:09.759 Number of I/O Submission Queues: 128 00:29:09.759 Number of I/O Completion Queues: 128 00:29:09.759 00:29:09.759 ZNS Specific Controller Data 00:29:09.759 ============================ 00:29:09.759 Zone Append Size Limit: 0 00:29:09.759 00:29:09.759 00:29:09.759 Active Namespaces 00:29:09.759 ================= 00:29:09.759 get_feature(0x05) failed 00:29:09.759 Namespace ID:1 00:29:09.759 Command Set Identifier: NVM (00h) 00:29:09.759 Deallocate: Supported 00:29:09.759 Deallocated/Unwritten Error: Not Supported 00:29:09.759 Deallocated Read Value: Unknown 00:29:09.759 Deallocate in Write Zeroes: Not Supported 00:29:09.759 Deallocated Guard Field: 0xFFFF 00:29:09.759 Flush: Supported 00:29:09.759 Reservation: Not Supported 00:29:09.759 Namespace Sharing Capabilities: Multiple Controllers 00:29:09.759 Size (in LBAs): 3750748848 (1788GiB) 00:29:09.759 Capacity (in LBAs): 3750748848 (1788GiB) 00:29:09.759 Utilization (in LBAs): 3750748848 (1788GiB) 00:29:09.759 UUID: 28bcfd92-1dbf-4c0d-b27c-d962ddb93602 00:29:09.759 Thin Provisioning: Not Supported 00:29:09.759 Per-NS Atomic Units: Yes 00:29:09.759 Atomic Write Unit (Normal): 8 00:29:09.759 Atomic Write Unit (PFail): 8 00:29:09.759 Preferred Write Granularity: 8 00:29:09.759 Atomic Compare & Write Unit: 8 00:29:09.759 Atomic Boundary Size (Normal): 0 00:29:09.759 Atomic Boundary Size (PFail): 0 00:29:09.759 Atomic Boundary Offset: 0 00:29:09.759 NGUID/EUI64 Never Reused: No 00:29:09.759 ANA group ID: 1 00:29:09.759 Namespace Write Protected: No 00:29:09.759 Number of LBA Formats: 1 00:29:09.759 Current LBA Format: LBA Format #00 00:29:09.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:09.759 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:09.759 rmmod nvme_tcp 00:29:09.759 rmmod nvme_fabrics 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 
-- # set -e 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.759 08:44:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:29:11.674 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:29:11.935 08:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:15.279 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:80:01.2 (8086 0b00): ioatdma -> 
vfio-pci 00:29:15.279 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:15.279 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:15.539 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:15.539 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:15.539 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:15.539 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:15.539 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:15.800 00:29:15.800 real 0m19.187s 00:29:15.800 user 0m5.215s 00:29:15.800 sys 0m11.069s 00:29:15.800 08:44:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.800 08:44:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.800 ************************************ 00:29:15.800 END TEST nvmf_identify_kernel_target 00:29:15.800 ************************************ 00:29:15.800 08:44:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:15.800 08:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:15.800 08:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:15.800 08:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.800 ************************************ 00:29:15.800 START TEST nvmf_auth_host 00:29:15.800 ************************************ 00:29:15.800 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:16.061 * Looking for test storage... 
00:29:16.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.061 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:16.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.062 --rc genhtml_branch_coverage=1 00:29:16.062 --rc genhtml_function_coverage=1 00:29:16.062 --rc genhtml_legend=1 00:29:16.062 --rc geninfo_all_blocks=1 00:29:16.062 --rc geninfo_unexecuted_blocks=1 00:29:16.062 00:29:16.062 ' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:16.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.062 --rc genhtml_branch_coverage=1 00:29:16.062 --rc genhtml_function_coverage=1 00:29:16.062 --rc genhtml_legend=1 00:29:16.062 --rc geninfo_all_blocks=1 00:29:16.062 --rc geninfo_unexecuted_blocks=1 00:29:16.062 00:29:16.062 ' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:16.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.062 --rc genhtml_branch_coverage=1 00:29:16.062 --rc genhtml_function_coverage=1 00:29:16.062 --rc genhtml_legend=1 00:29:16.062 --rc geninfo_all_blocks=1 00:29:16.062 --rc geninfo_unexecuted_blocks=1 00:29:16.062 00:29:16.062 ' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:16.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.062 --rc genhtml_branch_coverage=1 00:29:16.062 --rc genhtml_function_coverage=1 00:29:16.062 --rc genhtml_legend=1 00:29:16.062 --rc geninfo_all_blocks=1 00:29:16.062 --rc geninfo_unexecuted_blocks=1 00:29:16.062 00:29:16.062 ' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.062 08:44:07 
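The cmp_versions trace above is how the harness decides whether the installed lcov predates 2.x: both version strings are split on `IFS=.-:` into arrays and compared field by field. A condensed, self-contained re-creation of that logic (numeric fields only; the real scripts/common.sh handles more cases):

    lt() {  # succeeds when $1 is strictly older than $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov"   # true here, so the --rc lcov_* coverage options get set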
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:16.062 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.063 08:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.200 08:44:14 
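One recorded oddity worth flagging: the `[: : integer expression expected` message a few entries back is a real (if harmless) script bug, not test output. common.sh line 33 ran `[ '' -eq 1 ]`, and `[` rejects the empty string as a non-integer, so the test simply fails with status 2 and the script continues. A defensive form that treats unset/empty as 0 (illustrative variable name, not from common.sh):

    # as logged:  [ '' -eq 1 ]   ->  [: : integer expression expected
    [ "${maybe_empty:-0}" -eq 1 ] && echo "flag set"   # empty now compares as 0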
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:24.200 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:24.200 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.200 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.200 08:44:14 
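gather_supported_nvmf_pci_devs pre-populates per-device-ID arrays of BDFs; the two `Found 0000:4b:00.x (0x8086 - 0x159b)` hits above are Intel E810 ports matched from that table. The same enumeration can be reproduced outside the harness with lspci's vendor:device filter (IDs taken from the e810 table in the trace):

    for id in 8086:1592 8086:159b; do
        lspci -D -d "$id" | awk '{print $1}'   # prints full BDFs such as 0000:4b:00.0
    done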
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:24.201 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:24.201 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.201 08:44:14 
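The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` glob above is the whole trick for mapping a PCI function to its interface name: the kernel exposes each netdev as a directory under the device's sysfs node, which is how the harness learns the two ports are cvl_0_0 and cvl_0_1. Stand-alone:

    pci=0000:4b:00.0
    ls "/sys/bus/pci/devices/$pci/net"   # -> cvl_0_0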
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.201 08:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:29:24.201 00:29:24.201 --- 10.0.0.2 ping statistics --- 00:29:24.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.201 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:29:24.201 00:29:24.201 --- 10.0.0.1 ping statistics --- 00:29:24.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.201 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=3907029 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 3907029 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3907029 ']' 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
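nvmf_tcp_init makes a single machine act as both ends of a physical link: the two E810 ports are cabled back-to-back, so moving one into a fresh network namespace gives the target (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 in the root namespace) genuinely separate stacks. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays behind
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns

The two successful pings (0.606 ms and 0.248 ms) are the gate for the `return 0` that follows.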
00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.201 08:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=f9e6a5fc6a5f32c1b44f146bc81eb0bd 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.kCm 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key f9e6a5fc6a5f32c1b44f146bc81eb0bd 0 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 f9e6a5fc6a5f32c1b44f146bc81eb0bd 0 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=f9e6a5fc6a5f32c1b44f146bc81eb0bd 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.kCm 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.kCm 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kCm 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.462 08:44:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=9bfb1ce42faa4350ca91b15b9697d305bcf267f8bd3df3222f4649038f0b08c9 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.YVv 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 9bfb1ce42faa4350ca91b15b9697d305bcf267f8bd3df3222f4649038f0b08c9 3 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 9bfb1ce42faa4350ca91b15b9697d305bcf267f8bd3df3222f4649038f0b08c9 3 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=9bfb1ce42faa4350ca91b15b9697d305bcf267f8bd3df3222f4649038f0b08c9 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.YVv 00:29:24.462 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.YVv 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.YVv 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=0cc47f7abc6d502c4602fdfd7f7a45247882fbb18007b5f3 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.glL 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 0cc47f7abc6d502c4602fdfd7f7a45247882fbb18007b5f3 0 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 0cc47f7abc6d502c4602fdfd7f7a45247882fbb18007b5f3 0 
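Each `gen_dhchap_key <digest> <len>` cycle above draws len/2 random bytes as a hex string with xxd, then an inline `python -` wraps it in the NVMe-oF shared-secret format before the file is chmod'ed 0600 and stashed under /tmp. A stand-alone re-creation for the `null 32` case, assuming the TP-8006 convention of base64 over the secret bytes plus a little-endian CRC-32 trailer (the `00` field marks a null-digest, i.e. non-hashed, secret):

    key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars of secret material, as in the trace
    # the hex string itself is the secret; append its CRC-32 (assumed little-endian) and base64:
    python3 -c 'import base64,sys,zlib;k=sys.argv[1].encode();print("DHHC-1:00:"+base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()+":")' "$key"

The output has exactly the shape of the `DHHC-1:00:...:` strings handed to nvmet_auth_set_key at the end of this section.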
00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=0cc47f7abc6d502c4602fdfd7f7a45247882fbb18007b5f3 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.glL 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.glL 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.glL 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=3850e630760d276fa2c979e23435a45130628a573cd01c74 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.ris 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 3850e630760d276fa2c979e23435a45130628a573cd01c74 2 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 3850e630760d276fa2c979e23435a45130628a573cd01c74 2 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=3850e630760d276fa2c979e23435a45130628a573cd01c74 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.ris 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.ris 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ris 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.724 08:44:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=eb646cafcabe5b4bf3ef44f77e1eb4b5 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.EAO 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key eb646cafcabe5b4bf3ef44f77e1eb4b5 1 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 eb646cafcabe5b4bf3ef44f77e1eb4b5 1 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=eb646cafcabe5b4bf3ef44f77e1eb4b5 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.EAO 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.EAO 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EAO 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:24.724 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=9f52e977c280229a3677c3fd97b40282 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.aH3 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 9f52e977c280229a3677c3fd97b40282 1 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 9f52e977c280229a3677c3fd97b40282 1 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=9f52e977c280229a3677c3fd97b40282 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.aH3 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.aH3 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.aH3 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:24.725 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=15a8268985b1e0ddacf46ecae9d0dcf15c109b87628a9fb1 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.1LV 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 15a8268985b1e0ddacf46ecae9d0dcf15c109b87628a9fb1 2 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 15a8268985b1e0ddacf46ecae9d0dcf15c109b87628a9fb1 2 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=15a8268985b1e0ddacf46ecae9d0dcf15c109b87628a9fb1 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.1LV 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.1LV 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1LV 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:29:24.985 08:44:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=2e1fd830710a0e07a0b8e3e06fe5dae9 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.IB8 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 2e1fd830710a0e07a0b8e3e06fe5dae9 0 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 2e1fd830710a0e07a0b8e3e06fe5dae9 0 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=2e1fd830710a0e07a0b8e3e06fe5dae9 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.IB8 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.IB8 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.IB8 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=382a2b998a209729f9af722e787f9a24391543fe48181857799867816681a02d 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:29:24.985 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.CQd 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 382a2b998a209729f9af722e787f9a24391543fe48181857799867816681a02d 3 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 382a2b998a209729f9af722e787f9a24391543fe48181857799867816681a02d 3 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=382a2b998a209729f9af722e787f9a24391543fe48181857799867816681a02d 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.CQd 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.CQd 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.CQd 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3907029 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3907029 ']' 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.986 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kCm 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.YVv ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YVv 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.glL 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ris ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.ris 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EAO 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.aH3 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aH3 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1LV 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.IB8 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.IB8 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.CQd 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:25.246 08:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:25.246 08:44:17 
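The keyring_file_add_key loop above registers every generated secret (and its controller counterpart in ckeys) with the running nvmf_tgt over its RPC socket; rpc_cmd is just the harness wrapper around rpc.py. The equivalent direct invocations, assuming the stock scripts/rpc.py entry point and the default socket the target listens on:

    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.kCm
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YVv
    # ...and likewise for key1/ckey1 through key4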
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:25.246 08:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:28.551 Waiting for block devices as requested 00:29:28.551 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:28.551 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:28.812 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:28.812 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:28.812 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:28.812 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:29.073 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:29.073 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:29.073 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:29.334 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:29.334 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:29.594 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:29.594 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:29.594 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:29.594 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:29.855 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:29.855 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:30.798 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:30.798 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:30.798 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:29:30.798 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:30.799 No valid GPT data, bailing 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:30.799 08:44:22 
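configure_kernel_target assembles the kernel-side subsystem purely through configfs mkdir/echo/ln, using /dev/nvme0n1 (the drive handed back to the nvme driver during the block-device wait above) as namespace 1. xtrace hides redirect targets, so which attribute each of the following `echo`s feeds is inferred from the standard nvmet attribute names rather than read from the log:

    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir "$subsys" "$subsys/namespaces/1" "$cfs/ports/1"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"
    echo 1            > "$subsys/attr_allow_any_host"      # relaxed for now; revoked below
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
    echo tcp          > "$cfs/ports/1/addr_trtype"
    echo 4420         > "$cfs/ports/1/addr_trsvcid"
    echo ipv4         > "$cfs/ports/1/addr_adrfam"
    ln -s "$subsys" "$cfs/ports/1/subsystems/"

The `nvme discover` that follows confirms the wiring: two discovery-log records, the discovery subsystem itself plus nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420.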
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:30.799 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:29:31.060 00:29:31.060 Discovery Log Number of Records 2, Generation counter 2 00:29:31.060 =====Discovery Log Entry 0====== 00:29:31.060 trtype: tcp 00:29:31.060 adrfam: ipv4 00:29:31.060 subtype: current discovery subsystem 00:29:31.060 treq: not specified, sq flow control disable supported 00:29:31.060 portid: 1 00:29:31.060 trsvcid: 4420 00:29:31.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:31.060 traddr: 10.0.0.1 00:29:31.060 eflags: none 00:29:31.060 sectype: none 00:29:31.060 =====Discovery Log Entry 1====== 00:29:31.060 trtype: tcp 00:29:31.060 adrfam: ipv4 00:29:31.060 subtype: nvme subsystem 00:29:31.060 treq: not specified, sq flow control disable supported 00:29:31.060 portid: 1 00:29:31.060 trsvcid: 4420 00:29:31.060 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:31.060 traddr: 10.0.0.1 00:29:31.060 eflags: none 00:29:31.060 sectype: none 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:31.060 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.061 nvme0n1 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.061 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
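nvmet_auth_set_key (host/auth.sh@42-51) provisions the DH-HMAC-CHAP material for host0 on the kernel-target side; the four echoes map onto the nvmet host attributes. A hedged reconstruction (the dhchap_* attribute names are the kernel's configfs names and are assumed here, since the trace again shows only the echoed values; secrets abbreviated):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # host/auth.sh@48
  echo ffdhe2048      > "$host/dhchap_dhgroup"   # host/auth.sh@49
  echo 'DHHC-1:00:MGNjND...QUhcRw==:' > "$host/dhchap_key"       # @50, host secret
  echo 'DHHC-1:02:Mzg1MG...rwM5Jg==:' > "$host/dhchap_ctrl_key"  # @51, controller secret for bidirectional auth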
00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.322 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.323 08:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.323 nvme0n1 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.323 08:44:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.323 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.584 nvme0n1 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.584 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.585 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 nvme0n1 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.846 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.107 nvme0n1 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:32.107 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:32.108 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:32.108 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.108 08:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.369 nvme0n1 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.369 08:44:24 
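The script-line markers make the test's shape easy to read back out of the trace: host/auth.sh@100-104 is a triple loop over digests, DH groups, and key IDs, re-keying the kernel target and re-authenticating for every combination. Reconstructed from the markers as they appear above:

  for digest in "${digests[@]}"; do          # host/auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do    # @101
          for keyid in "${!keys[@]}"; do     # @102
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
          done
      done
  done

The sha256/ffdhe2048 column has just finished key IDs 0 through 4; the trace below repeats the same five-key sweep for ffdhe3072. The two-digit field in each DHHC-1:xx:...: secret encodes the HMAC used to transform the secret (00 = none, 01/02/03 = SHA-256/384/512), which is why the five key IDs exercise different key variants.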
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.369 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.629 nvme0n1 00:29:32.629 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.629 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:32.630 
08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.630 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.890 nvme0n1 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.890 08:44:24 
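On the SPDK initiator side, each connect_authenticate pass is two RPCs: pin the allowed digest and DH group, then attach with the matching key pair. The equivalent calls through scripts/rpc.py look like this (a sketch: the log drives them through the rpc_cmd wrapper, and key2/ckey2 are key names registered earlier in the test, outside this excerpt):

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify the controller came up, then tear down before the next combination
  scripts/rpc.py bdev_nvme_get_controllers
  scripts/rpc.py bdev_nvme_detach_controller nvme0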
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:32.890 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.891 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.151 nvme0n1 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.151 08:44:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:33.151 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.152 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:33.152 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:33.152 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:33.152 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:33.152 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.152 08:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.412 nvme0n1 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:33.412 08:44:25 
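Key ID 4 is the one deliberately asymmetric case: its ckey is empty, so the [[ -z '' ]] check just above skips the controller-key write, and the attach that follows carries --dhchap-key key4 with no --dhchap-ctrlr-key. Authentication is then unidirectional: the host proves itself, but never challenges the controller back. The array expansion at host/auth.sh@58 is what makes the flag optional:

  # appears verbatim in the trace: expands to nothing when ckeys[keyid] is unset or empty
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"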
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.412 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.681 nvme0n1 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # 
ip_candidates=() 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:33.681 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.682 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:33.682 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:33.682 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:33.682 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:33.682 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.682 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.941 nvme0n1 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:33.941 08:44:25 
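Every secret in this run uses the DHHC-1:NN:<base64>: representation defined for NVMe-oF in-band authentication: NN names the transformation hash (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key material plus a CRC tail, which is why the 00-, 01-, 02- and 03-class secrets above differ in length. Secrets of this shape can be generated with nvme-cli; a hedged example (option spelling per recent nvme-cli, worth checking against the installed version):

  # 32-byte unhashed secret (yields DHHC-1:00:...).
  nvme gen-dhchap-key --key-length=32 --nqn=nqn.2024-02.io.spdk:host0
  # SHA-384-transformed secret (yields DHHC-1:02:...).
  nvme gen-dhchap-key --hmac=2 --key-length=48 --nqn=nqn.2024-02.io.spdk:host0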
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.941 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.202 08:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.463 nvme0n1 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
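On the host side, connect_authenticate (auth.sh@104) reduces to two RPCs against the SPDK application: restrict the allowed digests and DH groups, then attach with the key names loaded into the keyring earlier in the test. Condensed for the ffdhe4096/keyid=2 iteration traced above (scripts/rpc.py spelling assumed; the test's rpc_cmd helper issues the same calls):

  # Limit DH-HMAC-CHAP negotiation to one digest and one DH group...
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # ...then attach; authentication happens during the fabrics connect.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2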
00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.463 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.724 nvme0n1 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.724 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:34.725 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:34.725 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:34.725 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:34.725 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.725 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.985 nvme0n1 00:29:34.985 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.985 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.985 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.985 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.985 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.985 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.246 08:44:26 
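The array assignment at auth.sh@58 just traced is what makes bidirectional authentication optional per key: ${ckeys[keyid]:+...} expands to the --dhchap-ctrlr-key argument pair only when a controller secret exists for that keyid, and to nothing otherwise, so one attach command line serves both cases. For keyid 4 above ckey is empty, and the attach that follows carries no --dhchap-ctrlr-key. A small standalone illustration of the same expansion:

  # ":+" substitutes the alternate words only when the slot is set and non-empty.
  declare -a ckeys=([0]=s0 [1]="" [2]=s2)
  for keyid in 0 1 2; do
      args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${args[@]:-<unidirectional>}"
  done
  # keyid=0 -> --dhchap-ctrlr-key ckey0
  # keyid=1 -> <unidirectional>
  # keyid=2 -> --dhchap-ctrlr-key ckey2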
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.246 08:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.508 nvme0n1 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip=NVMF_INITIATOR_IP 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.508 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.123 nvme0n1 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 
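The get_main_ns_ip helper traced repeatedly above picks the dial address by mapping the transport to the name of an environment variable and then dereferencing it, which is why every attach in this log targets 10.0.0.1. Its shape, reconstructed from the nvmf/common.sh xtrace (the indirect-expansion step and the early returns are inferred, not shown verbatim):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                  # "[[ -z tcp ]]"
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                  # ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                           # "[[ -z 10.0.0.1 ]]"
      echo "${!ip}"                                         # echo 10.0.0.1
  }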
00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.123 08:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.440 nvme0n1 00:29:36.440 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.440 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.440 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.440 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.440 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.440 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.702 08:44:28 
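Between provisioning and the next iteration the test proves the controller actually came up authenticated, then tears it down: list controllers, compare the name against nvme0 (the \n\v\m\e\0 pattern above is each character backslash-escaped so the right-hand side of == matches literally rather than as a glob), and detach. The stray nvme0n1 tokens scattered through the log are the kernel namespace device surfacing as each connect completes. Condensed sketch of auth.sh@64-65 (rpc.py spelling assumed, as before):

  # Verify the authenticated attach produced exactly the expected controller...
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]      # the suite runs with set -e, so a mismatch aborts
  # ...then detach so the next digest/dhgroup/keyid combination starts clean.
  scripts/rpc.py bdev_nvme_detach_controller nvme0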
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.702 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.963 nvme0n1 00:29:36.963 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.963 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.963 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.963 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.963 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.963 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:37.222 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.223 08:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.792 nvme0n1 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.792 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.052 nvme0n1 00:29:38.052 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.052 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.052 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.052 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.052 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.311 08:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:38.880 nvme0n1 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:39.140 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:39.141 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:39.141 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:39.141 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.141 08:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.081 nvme0n1 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:40.081 
08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.081 08:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 nvme0n1 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.651 
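
NOTE: connect_authenticate first pins the initiator to exactly one digest and one DH group, so a successful attach proves that specific combination negotiated end to end instead of silently falling back to another offer. The equivalent standalone call (rpc_cmd is the autotest wrapper around scripts/rpc.py):

    # restrict DH-HMAC-CHAP negotiation to the combination under test
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe8192
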
08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:40.651 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:40.652 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:40.652 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.652 08:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 nvme0n1 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
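
NOTE: Key slot 4 is the only one with an empty controller key (ckey=''), so the `[[ -z '' ]]` guard above drops --dhchap-ctrlr-key and that session uses unidirectional authentication (only the host authenticates to the controller); slots 0-3 also pass ckeyN, requesting bidirectional DH-HMAC-CHAP. The two attach shapes, as they appear in the trace (key0/ckey0 and key4 are the names of keys registered earlier in the test run):

    # bidirectional: host key plus controller key
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # unidirectional: no --dhchap-ctrlr-key (key slot 4)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
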
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.591 08:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.530 nvme0n1 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.530 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.531 nvme0n1 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.531 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.791 nvme0n1 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:42.791 08:44:34 
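
NOTE: get_main_ns_ip, traced again and again above, only maps the transport to the environment variable that holds the right address and then dereferences it; for tcp that is NVMF_INITIATOR_IP, which expands to 10.0.0.1 in this run. A sketch of the function body as it unfolds in the trace (the transport variable name is assumed, since the trace shows it already expanded to tcp):

    # pick the address variable for the transport in use, then dereference it
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    local ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
    echo "${!ip}"                                # indirect expansion -> 10.0.0.1
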
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:42.791 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:42.792 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.792 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:42.792 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:42.792 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:42.792 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.792 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:42.792 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.792 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.052 nvme0n1 00:29:43.052 08:44:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:43.052 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.053 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:43.053 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:43.053 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:43.053 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:43.053 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.053 08:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.313 nvme0n1 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
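
NOTE: The bare `echo 'hmac(sha384)'`, `echo ffdhe2048` and `echo DHHC-1:...` commands inside each nvmet_auth_set_key call only look truncated because xtrace does not print redirections; they are writes into the kernel nvmet host entry for this hostnqn. Presumably something like the following, with the configfs paths assumed from the standard nvmet layout rather than visible in this trace:

    # target side: per-host DH-HMAC-CHAP parameters via configfs
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"
    echo "$key"         > "$host/dhchap_key"       # host key for this key slot
    echo "$ckey"        > "$host/dhchap_ctrl_key"  # controller key, when non-empty
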
host/auth.sh@44 -- # digest=sha384 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.313 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.574 nvme0n1 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.574 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.834 nvme0n1 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.834 
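
NOTE: The bare `nvme0n1` lines are RPC output, not noise: bdev_nvme_attach_controller prints the bdev names it created, so each one marks a successful authenticated connect. connect_authenticate then double-checks by listing controllers before tearing down (the `\n\v\m\e\0` form is just how xtrace escapes the quoted string "nvme0"):

    # verify the controller exists, then detach for the next combination
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0
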
08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:43.834 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.835 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.835 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:43.835 08:44:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.835 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:43.835 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:43.835 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:43.835 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.835 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.835 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.095 nvme0n1 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.095 08:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.356 nvme0n1 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.356 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local 
-A ip_candidates 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.357 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.617 nvme0n1 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:44.617 
08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.617 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.878 nvme0n1 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.878 
08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:44.878 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.879 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.139 nvme0n1 00:29:45.139 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.139 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.139 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.139 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.139 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.139 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.399 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.399 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.399 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.399 08:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.399 08:44:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.399 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.659 nvme0n1 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:45.659 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.660 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.920 nvme0n1 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.920 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.180 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.180 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.180 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:46.180 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:46.180 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:46.180 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.181 08:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.441 nvme0n1 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:46.441 08:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.441 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.702 nvme0n1 00:29:46.702 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.702 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.702 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.702 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.702 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.702 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.702 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.702 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.703 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.274 nvme0n1 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:47.274 08:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.274 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 nvme0n1 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.843 08:44:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.843 08:44:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.843 08:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.415 nvme0n1 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:48.415 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.415 
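
Each iteration then runs connect_authenticate (host/auth.sh@55-65): configure the host's allowed digests and DH groups, attach a controller with the matching key pair, verify the controller actually appeared, and detach again. A sketch of that flow as it can be read off the trace; rpc_cmd is the SPDK RPC wrapper used throughout this log, and TEST_TRANSPORT is assumed to carry the "tcp" seen in the expanded commands:

    # Sketch of the flow at host/auth.sh@55-65, reconstructed from the xtrace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # expands to nothing when ckeys[keyid] is empty (see keyid 4)
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" "${ckey[@]}"
        # the repeated [[ nvme0 == \n\v\m\e\0 ]] comparisons are this check
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
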
08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.987 nvme0n1 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.987 08:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.559 nvme0n1 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.559 08:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.559 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.560 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.501 nvme0n1 00:29:50.501 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.501 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.501 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.501 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.501 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.501 08:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.501 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.072 nvme0n1 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.072 
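
The get_main_ns_ip helper that keeps reappearing (nvmf/common.sh@765-779) maps the transport to the right address variable and prints its value via indirect expansion. A sketch consistent with the traced lines; the exact error handling in nvmf/common.sh may differ:

    # Sketch of nvmf/common.sh@765-779 as traced; TEST_TRANSPORT=tcp here,
    # so the indirect lookup resolves NVMF_INITIATOR_IP to 10.0.0.1.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip} is the value of NVMF_INITIATOR_IP
        echo "${!ip}"
    }
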
08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.072 08:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.015 nvme0n1 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.015 08:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.959 nvme0n1 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.959 08:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:52.959 08:44:44 
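
The secrets themselves follow the NVMe DH-HMAC-CHAP representation: a "DHHC-1:" prefix, a two-digit field indicating how or whether the secret was transformed (00 = stored as-is; 01/02/03 correspond to SHA-256/384/512), a base64 payload carrying the secret plus a 4-byte CRC-32, and a trailing colon. That description comes from the spec and nvme-cli conventions rather than from this log, so treat it as an annotation. A quick sanity check on the payload length of one of the keys above:

    # Decode the base64 field of a DHHC-1 secret; the payload is the raw
    # secret plus a 4-byte CRC-32, so a 48-byte (:02:) secret yields 52 bytes.
    key='DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==:'
    cut -d: -f3 <<< "$key" | base64 -d | wc -c    # expect 52
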
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.959 08:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.530 nvme0n1 00:29:53.530 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.530 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.530 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.530 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.530 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.530 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:53.792 nvme0n1 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.792 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.053 nvme0n1 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:54.053 
08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.053 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.314 08:44:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.314 nvme0n1 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.314 
08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.314 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.575 nvme0n1 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:54.575 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.576 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.836 nvme0n1 00:29:54.836 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.836 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.836 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.836 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.836 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.836 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.837 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.098 nvme0n1 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.098 
08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:55.098 08:44:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.098 08:44:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.358 nvme0n1 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:55.358 08:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.358 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.618 nvme0n1 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.618 08:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.618 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.877 nvme0n1 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:55.877 
08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:55.877 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.878 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:56.138 nvme0n1 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:56.138 08:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.138 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:56.139 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:56.139 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.399 08:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.659 nvme0n1 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.659 08:44:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.659 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.660 08:44:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.660 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.921 nvme0n1 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.921 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.181 nvme0n1 00:29:57.181 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.181 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.181 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.181 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.181 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.181 08:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.441 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.701 nvme0n1 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.701 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.702 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.962 nvme0n1 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
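Each nvmet_auth_set_key round above (host/auth.sh@42-51) pushes the digest, DH group, and DHHC-1 secrets into the kernel target's configfs entry for the host. A minimal standalone sketch of that step, assuming the stock kernel nvmet configfs layout and the host NQN used in this run; the attribute paths and the placeholder secrets are assumptions, not copied from host/auth.sh:

    # Target-side DH-HMAC-CHAP key setup (sketch; assumes kernel nvmet configfs).
    hostnqn=nqn.2024-02.io.spdk:host0
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn

    echo 'hmac(sha512)'        > "$host_cfg/dhchap_hash"      # digest, as echoed at host/auth.sh@48
    echo ffdhe4096             > "$host_cfg/dhchap_dhgroup"   # DH group, as echoed at host/auth.sh@49
    echo 'DHHC-1:00:<base64>:' > "$host_cfg/dhchap_key"       # host secret (host/auth.sh@50)
    echo 'DHHC-1:00:<base64>:' > "$host_cfg/dhchap_ctrl_key"  # controller secret, only when a ckey is set (host/auth.sh@51)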
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.962 08:44:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.962 08:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.533 nvme0n1 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.533 08:44:50 
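On the initiator side, connect_authenticate (host/auth.sh@55-61) first restricts the allowed digest/DH-group pair so the handshake cannot negotiate anything else, then attaches with the per-keyid secrets. Roughly, in terms of SPDK's scripts/rpc.py (rpc_cmd wraps it; key1/ckey1 are the keyring names the suite registered earlier, so this is a sketch of the calls visible in the trace, not a drop-in script):

    rpc=scripts/rpc.py

    # Allow only the digest/dhgroup pair under test for DH-HMAC-CHAP negotiation.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Attach over TCP and authenticate with keyid 1; supplying ckey1 makes the
    # exchange bidirectional, i.e. the host also verifies the controller's response.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1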
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.533 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.104 nvme0n1 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.104 08:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.676 nvme0n1 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.676 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.246 nvme0n1 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:00.246 08:44:51 
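The get_main_ns_ip trace repeated above (nvmf/common.sh@765-779) only decides which environment variable carries the reachable address: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, and prints its value (10.0.0.1 in this run). Condensed into a sketch, with TEST_TRANSPORT assumed to be the suite variable that expanded to tcp in the [[ -z tcp ]] guards:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}    # tcp -> NVMF_INITIATOR_IP
        [[ -n $ip && -n ${!ip} ]] || return 1   # mirrors the [[ -z ... ]] checks in the trace
        echo "${!ip}"                           # 10.0.0.1 here
    }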
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:00.246 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.247 08:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.817 nvme0n1 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjllNmE1ZmM2YTVmMzJjMWI0NGYxNDZiYzgxZWIwYmTgDZ1l: 00:30:00.817 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: ]] 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJmYjFjZTQyZmFhNDM1MGNhOTFiMTViOTY5N2QzMDViY2YyNjdmOGJkM2RmMzIyMmY0NjQ5MDM4ZjBiMDhjORMpKbY=: 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.818 08:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.388 nvme0n1 00:30:01.388 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.388 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.388 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.388 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.388 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.388 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.648 08:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.218 nvme0n1 00:30:02.218 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.219 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.219 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.219 08:44:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.219 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.219 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.479 08:44:54 
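After every successful attach, the loop confirms that exactly the expected controller exists and tears it down before moving to the next keyid (host/auth.sh@64-65). The same check as a standalone sketch, reusing the rpc.py wrapper assumed above:

    rpc=scripts/rpc.py

    # The only controller should be nvme0; anything else means the handshake misbehaved.
    name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1

    # Detach so the next digest/dhgroup/keyid combination starts from a clean state.
    $rpc bdev_nvme_detach_controller nvme0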
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.479 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.049 nvme0n1 00:30:03.049 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.049 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.049 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.049 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.049 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.049 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTVhODI2ODk4NWIxZTBkZGFjZjQ2ZWNhZTlkMGRjZjE1YzEwOWI4NzYyOGE5ZmIxygktRw==: 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: ]] 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmUxZmQ4MzA3MTBhMGUwN2EwYjhlM2UwNmZlNWRhZTnkEg2G: 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:03.309 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:03.310 08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.310 
08:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.880 nvme0n1 00:30:03.880 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.880 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.880 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.880 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.880 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.880 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyYTJiOTk4YTIwOTcyOWY5YWY3MjJlNzg3ZjlhMjQzOTE1NDNmZTQ4MTgxODU3Nzk5ODY3ODE2NjgxYTAyZDvMMPo=: 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.140 08:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.712 nvme0n1 00:30:04.712 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.712 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.712 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.712 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.712 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.712 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.973 request: 00:30:04.973 { 00:30:04.973 "name": "nvme0", 00:30:04.973 "trtype": "tcp", 00:30:04.973 "traddr": "10.0.0.1", 00:30:04.973 "adrfam": "ipv4", 00:30:04.973 "trsvcid": "4420", 00:30:04.973 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:04.973 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:04.973 "prchk_reftag": false, 00:30:04.973 "prchk_guard": false, 00:30:04.973 "hdgst": false, 00:30:04.973 "ddgst": false, 00:30:04.973 "allow_unrecognized_csi": false, 00:30:04.973 "method": "bdev_nvme_attach_controller", 00:30:04.973 "req_id": 1 00:30:04.973 } 00:30:04.973 Got JSON-RPC error response 00:30:04.973 response: 00:30:04.973 { 00:30:04.973 "code": -5, 00:30:04.973 "message": "Input/output error" 00:30:04.973 } 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:04.973 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 
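The failed attach above is deliberate: the kernel target was just rekeyed via nvmet_auth_set_key, so an attach that supplies no --dhchap-key at all must be rejected, and it is, with JSON-RPC code -5 ("Input/output error"). The NOT wrapper asserts the failure. A simplified stand-in for the autotest_common.sh helper (the real one also tracks the exit status, as the es= lines above show):

  # Succeeds only when the wrapped command fails -- how the test asserts
  # that an attach without a DH-HMAC-CHAP key is rejected.
  NOT() { ! "$@"; }
  NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
      -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0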
00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.974 request: 00:30:04.974 { 00:30:04.974 "name": "nvme0", 00:30:04.974 "trtype": "tcp", 00:30:04.974 "traddr": "10.0.0.1", 00:30:04.974 "adrfam": "ipv4", 00:30:04.974 "trsvcid": "4420", 00:30:04.974 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:04.974 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:04.974 "prchk_reftag": false, 00:30:04.974 "prchk_guard": false, 00:30:04.974 "hdgst": false, 00:30:04.974 "ddgst": false, 00:30:04.974 "dhchap_key": "key2", 00:30:04.974 "allow_unrecognized_csi": false, 00:30:04.974 "method": "bdev_nvme_attach_controller", 00:30:04.974 "req_id": 1 00:30:04.974 } 00:30:04.974 Got JSON-RPC error response 00:30:04.974 response: 00:30:04.974 { 00:30:04.974 "code": -5, 00:30:04.974 "message": "Input/output error" 00:30:04.974 } 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.974 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
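The second negative case above presents the wrong host key (key2 where the target expects key 1) and fails the same way, code -5, i.e. -EIO from the failed authentication. A hedged sketch of checking that failure mode directly, assuming rpc.py emits the JSON-RPC error body on stderr as it does in this log:

  # The attach with the wrong key must fail and report code -5 (-EIO).
  if out=$(rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 2>&1); then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi
  grep -q '"code": -5' <<< "$out"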
00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.235 request: 00:30:05.235 { 00:30:05.235 "name": "nvme0", 00:30:05.235 "trtype": "tcp", 00:30:05.235 "traddr": "10.0.0.1", 00:30:05.235 "adrfam": "ipv4", 00:30:05.235 "trsvcid": "4420", 00:30:05.235 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:05.235 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:05.235 "prchk_reftag": false, 00:30:05.235 "prchk_guard": false, 00:30:05.235 "hdgst": false, 00:30:05.235 "ddgst": false, 00:30:05.235 "dhchap_key": "key1", 00:30:05.235 "dhchap_ctrlr_key": "ckey2", 00:30:05.235 "allow_unrecognized_csi": false, 00:30:05.235 "method": "bdev_nvme_attach_controller", 00:30:05.235 "req_id": 1 00:30:05.235 } 00:30:05.235 Got JSON-RPC error response 00:30:05.235 response: 00:30:05.235 { 00:30:05.235 "code": -5, 00:30:05.235 "message": "Input/output 
error" 00:30:05.235 } 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:05.235 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:05.236 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.236 08:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.236 nvme0n1 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.236 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.496 request: 00:30:05.496 { 00:30:05.496 "name": "nvme0", 00:30:05.496 "dhchap_key": "key1", 00:30:05.496 "dhchap_ctrlr_key": "ckey2", 00:30:05.496 "method": "bdev_nvme_set_keys", 00:30:05.496 "req_id": 1 00:30:05.496 } 00:30:05.496 Got JSON-RPC error response 00:30:05.496 response: 00:30:05.496 { 00:30:05.496 "code": -13, 00:30:05.496 "message": "Permission denied" 00:30:05.496 } 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:05.496 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:05.497 08:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:06.878 08:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.878 08:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:06.878 08:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.878 08:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.878 08:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.878 08:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:06.878 08:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:07.816 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.816 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNjNDdmN2FiYzZkNTAyYzQ2MDJmZGZkN2Y3YTQ1MjQ3ODgyZmJiMTgwMDdiNWYzQUhcRw==: 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzg1MGU2MzA3NjBkMjc2ZmEyYzk3OWUyMzQzNWE0NTEzMDYyOGE1NzNjZDAxYzc0rwM5Jg==: 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.817 nvme0n1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NDZjYWZjYWJlNWI0YmYzZWY0NGY3N2UxZWI0YjUrS2Fp: 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWY1MmU5NzdjMjgwMjI5YTM2NzdjM2ZkOTdiNDAyODIWO896: 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.817 request: 00:30:07.817 { 00:30:07.817 "name": "nvme0", 00:30:07.817 "dhchap_key": "key2", 00:30:07.817 "dhchap_ctrlr_key": "ckey1", 00:30:07.817 "method": "bdev_nvme_set_keys", 00:30:07.817 "req_id": 1 00:30:07.817 } 00:30:07.817 Got JSON-RPC error response 00:30:07.817 response: 00:30:07.817 { 00:30:07.817 "code": -13, 00:30:07.817 "message": "Permission denied" 00:30:07.817 } 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.817 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.077 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:08.077 08:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:09.016 08:45:00 
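The bdev_nvme_set_keys calls above show the rekeying rules on a live controller: swapping to a pair the target can verify (key2/ckey2) succeeds, while pairs that would break authentication (key1/ckey2, then key2/ckey1) are rejected with code -13, i.e. -EACCES "Permission denied". After the target is rekeyed out from under a controller attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, the test simply polls until the failed controller is torn down, which is what the repeated jq length / sleep 1s entries are. A sketch of that wait:

  # Poll until the failed controller has been garbage-collected.
  while (( $(rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1
  done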
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:09.016 rmmod nvme_tcp 00:30:09.016 rmmod nvme_fabrics 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 3907029 ']' 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 3907029 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3907029 ']' 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3907029 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3907029 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3907029' 00:30:09.016 killing process with pid 3907029 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3907029 00:30:09.016 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3907029 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:09.276 08:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:30:11.823 08:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:15.119 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:15.119 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:15.380 08:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kCm /tmp/spdk.key-null.glL /tmp/spdk.key-sha256.EAO /tmp/spdk.key-sha384.1LV /tmp/spdk.key-sha512.CQd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:15.380 08:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:18.681 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
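The kernel-target cleanup logged above follows configfs ordering rules: symlinks are removed before the directories they reference, and the nvmet modules are unloaded last. A sketch of the same sequence; note that bash xtrace does not print redirections, so the target of the bare `echo 0` is assumed here to be the namespace enable attribute:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  # Unlink the allowed host, then drop the host NQN itself.
  rm $subsys/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir $nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > $subsys/namespaces/1/enable   # assumed redirect target
  # Unlink the port->subsystem reference before removing either side.
  rm -f $nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir $subsys/namespaces/1
  rmdir $nvmet/ports/1
  rmdir $subsys
  modprobe -r nvmet_tcp nvmet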
00:30:18.681 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:18.681 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:18.681 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:19.020 00:30:19.020 real 1m3.177s 00:30:19.020 user 0m57.020s 00:30:19.020 sys 0m15.747s 00:30:19.020 08:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.020 08:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.020 ************************************ 00:30:19.020 END TEST nvmf_auth_host 00:30:19.020 ************************************ 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.347 ************************************ 00:30:19.347 START TEST nvmf_digest 00:30:19.347 ************************************ 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:19.347 * Looking for test storage... 
00:30:19.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:30:19.347 08:45:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.347 --rc genhtml_branch_coverage=1 00:30:19.347 --rc genhtml_function_coverage=1 00:30:19.347 --rc genhtml_legend=1 00:30:19.347 --rc geninfo_all_blocks=1 00:30:19.347 --rc geninfo_unexecuted_blocks=1 00:30:19.347 00:30:19.347 ' 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.347 --rc genhtml_branch_coverage=1 00:30:19.347 --rc genhtml_function_coverage=1 00:30:19.347 --rc genhtml_legend=1 00:30:19.347 --rc geninfo_all_blocks=1 00:30:19.347 --rc geninfo_unexecuted_blocks=1 00:30:19.347 00:30:19.347 ' 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.347 --rc genhtml_branch_coverage=1 00:30:19.347 --rc genhtml_function_coverage=1 00:30:19.347 --rc genhtml_legend=1 00:30:19.347 --rc geninfo_all_blocks=1 00:30:19.347 --rc geninfo_unexecuted_blocks=1 00:30:19.347 00:30:19.347 ' 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:19.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.347 --rc genhtml_branch_coverage=1 00:30:19.347 --rc genhtml_function_coverage=1 00:30:19.347 --rc genhtml_legend=1 00:30:19.347 --rc geninfo_all_blocks=1 00:30:19.347 --rc geninfo_unexecuted_blocks=1 00:30:19.347 00:30:19.347 ' 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.347 
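The lt/cmp_versions dance above is the digest test deciding whether the installed lcov predates 2.x, so it can export the matching legacy LCOV_OPTS coverage flags. A simplified sketch of that field-by-field comparison, not the full scripts/common.sh helper (which also splits on '-' and ':'):

  # Return 0 iff dotted version $1 is strictly less than $2.
  ver_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  ver_lt 1.15 2 && echo "lcov < 2: use legacy LCOV_OPTS"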
08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:19.347 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:19.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:19.348 08:45:11 
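The "integer expression expected" complaint from nvmf/common.sh line 33 above is benign: the traced test is '[' '' -eq 1 ']', a numeric comparison against an empty (unset) flag variable, so the test errors instead of quietly returning false and the script continues. A defensive form that avoids the warning; FLAG is a stand-in name, not necessarily the actual variable in common.sh:

  # Default the flag to 0 so the numeric test always sees an integer.
  if [ "${FLAG:-0}" -eq 1 ]; then
      echo "flag set"
  fi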
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:19.348 08:45:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.930 
08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:25.930 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:25.930 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:25.930 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.930 
08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:25.930 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.930 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.931 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:30:26.192 00:30:26.192 --- 10.0.0.2 ping statistics --- 00:30:26.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.192 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:30:26.192 00:30:26.192 --- 10.0.0.1 ping statistics --- 00:30:26.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.192 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:26.192 ************************************ 00:30:26.192 START TEST nvmf_digest_clean 00:30:26.192 ************************************ 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=3925154 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 3925154 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3925154 ']' 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:26.192 08:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:26.453 [2024-10-01 08:45:18.043344] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:30:26.453 [2024-10-01 08:45:18.043408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.453 [2024-10-01 08:45:18.114687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.453 [2024-10-01 08:45:18.187169] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.453 [2024-10-01 08:45:18.187208] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.453 [2024-10-01 08:45:18.187216] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.453 [2024-10-01 08:45:18.187222] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.453 [2024-10-01 08:45:18.187229] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
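For readers reconstructing the setup: the nvmf_tcp_init sequence traced above splits the two detected ice ports between the root namespace (initiator side) and a dedicated target namespace. Condensed into a standalone sketch, with the interface names and addresses exactly as assigned in this run:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # reachable both ways

Every target-side command from here on is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is what NVMF_TARGET_NS_CMD holds.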
00:30:26.453 [2024-10-01 08:45:18.187839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.022 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.022 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:27.022 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:27.022 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.022 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:27.283 null0 00:30:27.283 [2024-10-01 08:45:18.935828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.283 [2024-10-01 08:45:18.960048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3925294 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3925294 /var/tmp/bperf.sock 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3925294 ']' 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
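One harness wart worth flagging: the earlier message '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected' is bash refusing to evaluate [ '' -eq 1 ], because -eq needs integer operands; the test errors out instead of returning false, although build_nvmf_app_args carries on. A minimal guarded form, assuming the unset variable is an optional config knob such as SPDK_RUN_NON_ROOT (the actual name is not expanded in this trace, so treat the names below as illustrative):

    # noisy when the variable is unset: [ "" -eq 1 ] is an error, not "false"
    if [ "$SPDK_RUN_NON_ROOT" -eq 1 ]; then run_non_root=1; fi
    # defaulting the expansion keeps the operand numeric and silences the warning
    if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then run_non_root=1; fi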
00:30:27.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:27.283 08:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:27.283 [2024-10-01 08:45:19.014095] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:30:27.283 [2024-10-01 08:45:19.014144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925294 ] 00:30:27.283 [2024-10-01 08:45:19.092126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.542 [2024-10-01 08:45:19.156218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.112 08:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:28.112 08:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:28.112 08:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:28.112 08:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:28.112 08:45:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:28.371 08:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.371 08:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.631 nvme0n1 00:30:28.631 08:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:28.631 08:45:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.631 Running I/O for 2 seconds... 
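Stripped of the wrappers, the client side of each run above is four steps: start bdevperf idle, finish its framework init over the per-instance RPC socket, attach an NVMe/TCP controller with data digest enabled, and trigger the workload. All arguments below are taken verbatim from the trace; paths are relative to the spdk checkout:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &   # -z: stay idle until driven over RPC
    # (the harness polls /var/tmp/bperf.sock before issuing RPCs)
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--ddgst turns on the NVMe/TCP data digest, so every data PDU costs a crc32c computation; that is the work these digest tests exercise.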
00:30:30.951 19239.00 IOPS, 75.15 MiB/s 19585.50 IOPS, 76.51 MiB/s 00:30:30.951 Latency(us) 00:30:30.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.951 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:30.951 nvme0n1 : 2.00 19605.92 76.59 0.00 0.00 6520.97 3003.73 20753.07 00:30:30.951 =================================================================================================================== 00:30:30.951 Total : 19605.92 76.59 0.00 0.00 6520.97 3003.73 20753.07 00:30:30.951 { 00:30:30.951 "results": [ 00:30:30.951 { 00:30:30.951 "job": "nvme0n1", 00:30:30.951 "core_mask": "0x2", 00:30:30.951 "workload": "randread", 00:30:30.951 "status": "finished", 00:30:30.951 "queue_depth": 128, 00:30:30.951 "io_size": 4096, 00:30:30.951 "runtime": 2.004446, 00:30:30.951 "iops": 19605.91604862391, 00:30:30.951 "mibps": 76.58560956493714, 00:30:30.951 "io_failed": 0, 00:30:30.951 "io_timeout": 0, 00:30:30.951 "avg_latency_us": 6520.972566901618, 00:30:30.951 "min_latency_us": 3003.733333333333, 00:30:30.951 "max_latency_us": 20753.066666666666 00:30:30.951 } 00:30:30.951 ], 00:30:30.951 "core_count": 1 00:30:30.951 } 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:30.951 | select(.opcode=="crc32c") 00:30:30.951 | "\(.module_name) \(.executed)"' 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3925294 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3925294 ']' 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3925294 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3925294 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3925294' 00:30:30.951 killing process with pid 3925294 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3925294 00:30:30.951 Received shutdown signal, test time was about 2.000000 seconds 00:30:30.951 00:30:30.951 Latency(us) 00:30:30.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.951 =================================================================================================================== 00:30:30.951 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.951 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3925294 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3925995 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3925995 /var/tmp/bperf.sock 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3925995 ']' 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:31.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:31.211 08:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:31.211 [2024-10-01 08:45:22.853246] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:30:31.211 [2024-10-01 08:45:22.853303] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925995 ] 00:30:31.211 I/O size of 131072 is greater than zero copy threshold (65536). 
00:30:31.211 Zero copy mechanism will not be used. 00:30:31.211 [2024-10-01 08:45:22.931342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.211 [2024-10-01 08:45:22.996107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.151 08:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:32.151 08:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:32.151 08:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:32.151 08:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:32.151 08:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:32.151 08:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.151 08:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:32.411 nvme0n1 00:30:32.672 08:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:32.672 08:45:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:32.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:32.672 Zero copy mechanism will not be used. 00:30:32.672 Running I/O for 2 seconds... 
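The pass/fail criterion for each run is not the IOPS figure but where the crc32c work landed. (The surrounding 'zero copy threshold' notices are informational: 128 KiB I/O exceeds the 64 KiB zero-copy send limit, so those transfers are simply copied.) After perform_tests, the harness reads the accel framework statistics back over the same bperf socket and expects a non-zero count from the software module, since scan_dsa=false left no DSA module in play. Using the exact jq filter from the trace:

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
            | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 )) && [ "$acc_module" = software ]   # expected: software, executed > 0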
00:30:34.552 3277.00 IOPS, 409.62 MiB/s 3474.50 IOPS, 434.31 MiB/s 00:30:34.553 Latency(us) 00:30:34.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:34.553 nvme0n1 : 2.01 3479.12 434.89 0.00 0.00 4594.45 750.93 7809.71 00:30:34.553 =================================================================================================================== 00:30:34.553 Total : 3479.12 434.89 0.00 0.00 4594.45 750.93 7809.71 00:30:34.553 { 00:30:34.553 "results": [ 00:30:34.553 { 00:30:34.553 "job": "nvme0n1", 00:30:34.553 "core_mask": "0x2", 00:30:34.553 "workload": "randread", 00:30:34.553 "status": "finished", 00:30:34.553 "queue_depth": 16, 00:30:34.553 "io_size": 131072, 00:30:34.553 "runtime": 2.005968, 00:30:34.553 "iops": 3479.118310960095, 00:30:34.553 "mibps": 434.8897888700119, 00:30:34.553 "io_failed": 0, 00:30:34.553 "io_timeout": 0, 00:30:34.553 "avg_latency_us": 4594.4469293595075, 00:30:34.553 "min_latency_us": 750.9333333333333, 00:30:34.553 "max_latency_us": 7809.706666666667 00:30:34.553 } 00:30:34.553 ], 00:30:34.553 "core_count": 1 00:30:34.553 } 00:30:34.553 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:34.553 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:34.553 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:34.553 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:34.553 | select(.opcode=="crc32c") 00:30:34.553 | "\(.module_name) \(.executed)"' 00:30:34.553 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:34.812 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3925995 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3925995 ']' 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3925995 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3925995 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3925995' 00:30:34.813 killing process with pid 3925995 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3925995 00:30:34.813 Received shutdown signal, test time was about 2.000000 seconds 00:30:34.813 00:30:34.813 Latency(us) 00:30:34.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.813 =================================================================================================================== 00:30:34.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:34.813 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3925995 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3926842 00:30:35.073 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3926842 /var/tmp/bperf.sock 00:30:35.074 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3926842 ']' 00:30:35.074 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:35.074 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:35.074 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:35.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:35.074 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:35.074 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:35.074 08:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:35.074 [2024-10-01 08:45:26.771999] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:30:35.074 [2024-10-01 08:45:26.772059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926842 ] 00:30:35.074 [2024-10-01 08:45:26.845774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.333 [2024-10-01 08:45:26.899247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.902 08:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:35.902 08:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:35.902 08:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:35.902 08:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:35.902 08:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:36.162 08:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:36.162 08:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:36.422 nvme0n1 00:30:36.422 08:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:36.422 08:45:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:36.422 Running I/O for 2 seconds... 
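A quick consistency check on the result tables: MiB/s should equal IOPS x io_size / 2^20. Spot-checking the 131072-byte randread run reported above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 3479.12 * 131072 / (1024 * 1024) }'   # prints 434.89, matching the table

The same identity holds for the 4096-byte runs (e.g. 19605.92 x 4096 / 2^20 = 76.59).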
00:30:38.368 21613.00 IOPS, 84.43 MiB/s 21702.00 IOPS, 84.77 MiB/s 00:30:38.368 Latency(us) 00:30:38.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.368 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.368 nvme0n1 : 2.00 21719.73 84.84 0.00 0.00 5888.61 3249.49 11687.25 00:30:38.368 =================================================================================================================== 00:30:38.368 Total : 21719.73 84.84 0.00 0.00 5888.61 3249.49 11687.25 00:30:38.368 { 00:30:38.368 "results": [ 00:30:38.368 { 00:30:38.368 "job": "nvme0n1", 00:30:38.368 "core_mask": "0x2", 00:30:38.368 "workload": "randwrite", 00:30:38.368 "status": "finished", 00:30:38.368 "queue_depth": 128, 00:30:38.368 "io_size": 4096, 00:30:38.368 "runtime": 2.004261, 00:30:38.368 "iops": 21719.726123493896, 00:30:38.368 "mibps": 84.84268016989803, 00:30:38.368 "io_failed": 0, 00:30:38.368 "io_timeout": 0, 00:30:38.368 "avg_latency_us": 5888.608773316181, 00:30:38.368 "min_latency_us": 3249.4933333333333, 00:30:38.368 "max_latency_us": 11687.253333333334 00:30:38.368 } 00:30:38.368 ], 00:30:38.368 "core_count": 1 00:30:38.368 } 00:30:38.368 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:38.368 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:38.368 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:38.368 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:38.368 | select(.opcode=="crc32c") 00:30:38.368 | "\(.module_name) \(.executed)"' 00:30:38.368 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3926842 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3926842 ']' 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3926842 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3926842 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3926842' 00:30:38.628 killing process with pid 3926842 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3926842 00:30:38.628 Received shutdown signal, test time was about 2.000000 seconds 00:30:38.628 00:30:38.628 Latency(us) 00:30:38.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.628 =================================================================================================================== 00:30:38.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:38.628 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3926842 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3927659 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3927659 /var/tmp/bperf.sock 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3927659 ']' 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:38.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:38.888 08:45:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:38.888 [2024-10-01 08:45:30.566460] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:30:38.888 [2024-10-01 08:45:30.566518] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927659 ] 00:30:38.888 I/O size of 131072 is greater than zero copy threshold (65536). 
00:30:38.888 Zero copy mechanism will not be used. 00:30:38.888 [2024-10-01 08:45:30.640800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.888 [2024-10-01 08:45:30.693951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.826 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:39.826 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:39.826 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:39.826 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:39.826 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:39.826 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:39.826 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:40.085 nvme0n1 00:30:40.085 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:40.085 08:45:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:40.085 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:40.085 Zero copy mechanism will not be used. 00:30:40.085 Running I/O for 2 seconds... 
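Teardown is symmetric for every run: the bperf instance is stopped through the killprocess helper, and the all-zero Latency table printed after each 'killing process' line is just bdevperf's shutdown summary, not a failed measurement. The helper's traced logic condenses to roughly this sketch (error handling trimmed; the sudo branch is abbreviated relative to the real autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                    # assert the process is still alive
        if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            sudo kill "$pid"                              # abbreviated: kill under sudo, not sudo itself
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                       # reap and propagate the exit status
    }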
00:30:42.406 3992.00 IOPS, 499.00 MiB/s 4630.00 IOPS, 578.75 MiB/s 00:30:42.406 Latency(us) 00:30:42.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.406 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:42.406 nvme0n1 : 2.01 4627.78 578.47 0.00 0.00 3451.56 1556.48 15510.19 00:30:42.406 =================================================================================================================== 00:30:42.406 Total : 4627.78 578.47 0.00 0.00 3451.56 1556.48 15510.19 00:30:42.406 { 00:30:42.406 "results": [ 00:30:42.406 { 00:30:42.406 "job": "nvme0n1", 00:30:42.406 "core_mask": "0x2", 00:30:42.406 "workload": "randwrite", 00:30:42.406 "status": "finished", 00:30:42.406 "queue_depth": 16, 00:30:42.406 "io_size": 131072, 00:30:42.406 "runtime": 2.005282, 00:30:42.406 "iops": 4627.778038201111, 00:30:42.406 "mibps": 578.4722547751388, 00:30:42.406 "io_failed": 0, 00:30:42.406 "io_timeout": 0, 00:30:42.406 "avg_latency_us": 3451.558252873563, 00:30:42.406 "min_latency_us": 1556.48, 00:30:42.406 "max_latency_us": 15510.186666666666 00:30:42.406 } 00:30:42.406 ], 00:30:42.406 "core_count": 1 00:30:42.406 } 00:30:42.406 08:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:42.406 08:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:42.406 08:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:42.406 08:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:42.406 | select(.opcode=="crc32c") 00:30:42.406 | "\(.module_name) \(.executed)"' 00:30:42.406 08:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3927659 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3927659 ']' 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3927659 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3927659 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3927659' 00:30:42.406 killing process with pid 3927659 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3927659 00:30:42.406 Received shutdown signal, test time was about 2.000000 seconds 00:30:42.406 00:30:42.406 Latency(us) 00:30:42.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.406 =================================================================================================================== 00:30:42.406 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:42.406 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3927659 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3925154 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3925154 ']' 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3925154 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3925154 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3925154' 00:30:42.665 killing process with pid 3925154 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3925154 00:30:42.665 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3925154 00:30:42.925 00:30:42.925 real 0m16.520s 00:30:42.925 user 0m32.689s 00:30:42.925 sys 0m3.508s 00:30:42.925 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:42.925 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:42.925 ************************************ 00:30:42.925 END TEST nvmf_digest_clean 00:30:42.925 ************************************ 00:30:42.925 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:42.925 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:42.925 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:42.925 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:42.925 ************************************ 00:30:42.925 START TEST nvmf_digest_error 00:30:42.925 ************************************ 00:30:42.925 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:30:42.925 08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:42.926 08:45:34 
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=3928373
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 3928373
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3928373 ']'
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
08:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:42.926 [2024-10-01 08:45:34.643170] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:30:42.926 [2024-10-01 08:45:34.643230] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:42.926 [2024-10-01 08:45:34.715860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:43.185 [2024-10-01 08:45:34.787202] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:43.185 [2024-10-01 08:45:34.787244] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:43.185 [2024-10-01 08:45:34.787251] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:43.185 [2024-10-01 08:45:34.787258] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:43.185 [2024-10-01 08:45:34.787264] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
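
waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, simply polls the target's RPC socket until it answers. A minimal sketch of the same loop; the retry budget and sleep interval here are illustrative:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods is served even in the --wait-for-rpc (pre-init) state,
    # which makes it a safe liveness probe for a target started this way.
    if "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
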
00:30:43.185 [2024-10-01 08:45:34.787876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:30:43.753 08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:43.753 [2024-10-01 08:45:35.477869] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:30:43.753 08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:43.753 null0
00:30:44.013 [2024-10-01 08:45:35.559693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:44.013 [2024-10-01 08:45:35.583925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:44.013 08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3928631
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3928631 /var/tmp/bperf.sock
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3928631 ']'
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
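
common_target_config drives the same target-side RPCs by hand. Condensed, the setup this trace just performed looks roughly like the sketch below; the null-bdev sizing and the subsystem flags are assumptions, while the crc32c-to-error-module assignment, the TCP transport, and the 10.0.0.2:4420 listener match the NOTICE lines above:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$rpc accel_assign_opc -o crc32c -m error    # route crc32c ops to the error-injection module
$rpc framework_start_init                   # leave the --wait-for-rpc holding state
$rpc bdev_null_create null0 1000 512        # backing namespace (size/block size assumed)
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The lone "null0" line in the trace is rpc_cmd echoing the created bdev name back.
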
00:30:44.013 08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
08:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:44.013 [2024-10-01 08:45:35.640684] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:30:44.013 [2024-10-01 08:45:35.640734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928631 ]
00:30:44.013 [2024-10-01 08:45:35.716281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:44.013 [2024-10-01 08:45:35.769986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:30:44.953 08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
08:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:45.213 nvme0n1
00:30:45.213 08:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
08:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
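
Put together, the host side of this error subtest is: start bdevperf idle (-z), allow unlimited retries so injected digest failures do not abort the run, attach the controller with data digest enabled (--ddgst), arm the corruptor for 256 crc32c operations, then kick the workload. Every command below appears verbatim in the trace above; only the backgrounding and the wait-for-socket plumbing are elided in this sketch:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock
"$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z &

"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t disable
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

With crc32c routed to the error module on the host, every received data digest is "verified" by a deliberately corrupted checksum, which is why the 2-second run that follows is a wall of digest errors.
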
00:30:45.473 08:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.473 08:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:45.473 08:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:45.473 Running I/O for 2 seconds... 00:30:45.473 [2024-10-01 08:45:37.143417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.473 [2024-10-01 08:45:37.143447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.473 [2024-10-01 08:45:37.143456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.473 [2024-10-01 08:45:37.153148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.473 [2024-10-01 08:45:37.153168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.473 [2024-10-01 08:45:37.153175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.473 [2024-10-01 08:45:37.167259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.473 [2024-10-01 08:45:37.167279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.473 [2024-10-01 08:45:37.167286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.473 [2024-10-01 08:45:37.179245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.473 [2024-10-01 08:45:37.179263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.473 [2024-10-01 08:45:37.179270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.473 [2024-10-01 08:45:37.191009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.473 [2024-10-01 08:45:37.191028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.473 [2024-10-01 08:45:37.191035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.473 [2024-10-01 08:45:37.204632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.473 [2024-10-01 08:45:37.204651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.474 [2024-10-01 08:45:37.204657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.474 [2024-10-01 08:45:37.218148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.474 [2024-10-01 08:45:37.218166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.474 [2024-10-01 08:45:37.218173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.474 [2024-10-01 08:45:37.230714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.474 [2024-10-01 08:45:37.230732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.474 [2024-10-01 08:45:37.230739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.474 [2024-10-01 08:45:37.243610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.474 [2024-10-01 08:45:37.243627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.474 [2024-10-01 08:45:37.243634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.474 [2024-10-01 08:45:37.254288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.474 [2024-10-01 08:45:37.254317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.474 [2024-10-01 08:45:37.254324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.474 [2024-10-01 08:45:37.267804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.474 [2024-10-01 08:45:37.267822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.474 [2024-10-01 08:45:37.267829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.474 [2024-10-01 08:45:37.281286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.474 [2024-10-01 08:45:37.281305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.474 [2024-10-01 08:45:37.281312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.474 [2024-10-01 08:45:37.293371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.474 [2024-10-01 08:45:37.293390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.474 [2024-10-01 08:45:37.293396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.305629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.305647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.305654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.318913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.318931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.318938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.330659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.330681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.330688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.341632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.341649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.341655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.354463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.354481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.354488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.368565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.368583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.368590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.381985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.382007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.382014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.393991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.394012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.394019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.406963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.406981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.406988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.418745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.418764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.418771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.431972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.431990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.734 [2024-10-01 08:45:37.432001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.734 [2024-10-01 08:45:37.444548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.734 [2024-10-01 08:45:37.444566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 [2024-10-01 08:45:37.444572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.735 [2024-10-01 08:45:37.456573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.735 [2024-10-01 08:45:37.456591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 [2024-10-01 08:45:37.456597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.735 [2024-10-01 08:45:37.468477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.735 [2024-10-01 08:45:37.468496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 
[2024-10-01 08:45:37.468502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.735 [2024-10-01 08:45:37.480171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.735 [2024-10-01 08:45:37.480189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 [2024-10-01 08:45:37.480195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.735 [2024-10-01 08:45:37.493942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.735 [2024-10-01 08:45:37.493961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 [2024-10-01 08:45:37.493967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.735 [2024-10-01 08:45:37.507886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.735 [2024-10-01 08:45:37.507904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 [2024-10-01 08:45:37.507911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.735 [2024-10-01 08:45:37.517697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.735 [2024-10-01 08:45:37.517715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 [2024-10-01 08:45:37.517721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.735 [2024-10-01 08:45:37.532825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.735 [2024-10-01 08:45:37.532843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 [2024-10-01 08:45:37.532849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.735 [2024-10-01 08:45:37.545480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.735 [2024-10-01 08:45:37.545497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.735 [2024-10-01 08:45:37.545507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.995 [2024-10-01 08:45:37.557291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.995 [2024-10-01 08:45:37.557309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16790 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.995 [2024-10-01 08:45:37.557315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.995 [2024-10-01 08:45:37.568527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.995 [2024-10-01 08:45:37.568544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.995 [2024-10-01 08:45:37.568551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.995 [2024-10-01 08:45:37.581889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.995 [2024-10-01 08:45:37.581906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.995 [2024-10-01 08:45:37.581913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.995 [2024-10-01 08:45:37.595111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.995 [2024-10-01 08:45:37.595129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.995 [2024-10-01 08:45:37.595135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.608858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.608875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.608881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.620725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.620742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.620749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.634054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.634072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.634079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.645927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.645944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.645951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.656863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.656885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.656892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.669936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.669953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.669959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.683789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.683807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.683814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.696286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.696304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.696311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.708274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.708293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.708299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.720458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.720476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.720482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.733765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.733783] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.733790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.744502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.744519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.744526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.757873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.757891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.757898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.770231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.770249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.770256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.784807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.784825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.784831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.796810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.796827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.796834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:45.996 [2024-10-01 08:45:37.807132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:45.996 [2024-10-01 08:45:37.807150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.996 [2024-10-01 08:45:37.807156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.821915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 
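
Each completion above ends in (00/22): status code type 0x0 (generic command status) with status code 0x22, which the NVMe spec defines as Command Transient Transport Error. dnr:0 means the do-not-retry bit is clear, so with --bdev-retry-count -1 the host quietly resubmits each failed READ instead of failing the job. A quick way to triage a captured copy of this output (the log file name is illustrative):

# How many reads failed digest verification, and which cids were hit?
grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log
grep -o 'cid:[0-9]*' bperf.log | sort | uniq -c | sort -rn | head
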
00:30:46.257 [2024-10-01 08:45:37.821932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.821939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.832754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.832771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.832778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.846371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.846388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.846395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.859607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.859625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.859631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.870625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.870642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.870652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.882889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.882907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.882913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.895527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.895544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.895551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.909092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.909109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.909116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.922370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.922388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.922394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.932324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.932341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.932348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.946831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.946849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.946856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.959360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.959378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.959385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.971126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.971145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.971152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.983768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.983788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.983795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:37.997785] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:37.997803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:37.997809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:38.010759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:38.010776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:38.010783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:38.022756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:38.022773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:38.022780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:38.033666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:38.033683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:38.033690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:38.046323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:38.046341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:38.046348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:38.059488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:38.059506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:38.059512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.257 [2024-10-01 08:45:38.072677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.257 [2024-10-01 08:45:38.072694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.257 [2024-10-01 08:45:38.072701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
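
The "20117.00 IOPS, 78.58 MiB/s" line interleaved just below is bdevperf's periodic rate report for this 4 KiB randread job; throughput is simply IOPS times the -o 4096 I/O size. As a sanity check:

# MiB/s = IOPS * io_size / 2^20; with 4 KiB reads:
awk 'BEGIN { printf "%.2f MiB/s\n", 20117.00 * 4096 / (1024 * 1024) }'
# -> 78.58 MiB/s
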
00:30:46.519 [2024-10-01 08:45:38.086263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.519 [2024-10-01 08:45:38.086282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.519 [2024-10-01 08:45:38.086288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.519 [2024-10-01 08:45:38.099473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.519 [2024-10-01 08:45:38.099490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.519 [2024-10-01 08:45:38.099497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.519 [2024-10-01 08:45:38.111062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.519 [2024-10-01 08:45:38.111079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.519 [2024-10-01 08:45:38.111085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.519 20117.00 IOPS, 78.58 MiB/s [2024-10-01 08:45:38.124357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.519 [2024-10-01 08:45:38.124374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.519 [2024-10-01 08:45:38.124381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.519 [2024-10-01 08:45:38.135975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.519 [2024-10-01 08:45:38.135992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.519 [2024-10-01 08:45:38.136003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.519 [2024-10-01 08:45:38.148347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.519 [2024-10-01 08:45:38.148365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.519 [2024-10-01 08:45:38.148371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:46.519 [2024-10-01 08:45:38.161469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0) 00:30:46.519 [2024-10-01 08:45:38.161486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.519 [2024-10-01 08:45:38.161493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:46.519 [2024-10-01 08:45:38.174465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0)
00:30:46.519 [2024-10-01 08:45:38.174483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:46.519 [2024-10-01 08:45:38.174490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0xddfed0), READ sqid:1 len:1, COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens more qid:1 completions from 08:45:38.187256 through 08:45:39.104173, differing only in timestamp, cid, and lba; all but the last are omitted here ...]
00:30:47.306 [2024-10-01 08:45:39.118912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddfed0)
00:30:47.306 [2024-10-01 08:45:39.118930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.306 [2024-10-01 08:45:39.118936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:47.565 20206.50 IOPS, 78.93 MiB/s
00:30:47.565 Latency(us)
00:30:47.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:47.565 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:47.565 nvme0n1 : 2.00 20238.75 79.06 0.00 0.00 6319.66 2307.41 17694.72
00:30:47.565 ===================================================================================================================
00:30:47.565 Total : 20238.75 79.06 0.00 0.00 6319.66 2307.41 17694.72
00:30:47.565 {
00:30:47.565 "results": [
00:30:47.565 {
00:30:47.565 "job": "nvme0n1",
00:30:47.565 "core_mask": "0x2",
00:30:47.565 "workload": "randread",
00:30:47.565 "status": "finished",
00:30:47.565 "queue_depth": 128,
00:30:47.565 "io_size": 4096,
00:30:47.565 "runtime": 2.003138,
00:30:47.565 "iops": 20238.745408454135,
00:30:47.565 "mibps": 79.05759925177396,
00:30:47.565 "io_failed": 0,
00:30:47.565 "io_timeout": 0,
00:30:47.565 "avg_latency_us": 6319.655870353469,
00:30:47.565 "min_latency_us": 2307.4133333333334,
00:30:47.565 "max_latency_us": 17694.72
00:30:47.565 }
00:30:47.565 ],
00:30:47.565 "core_count": 1
00:30:47.565 }
00:30:47.565 08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:47.565 | .driver_specific
00:30:47.565 | .nvme_error
00:30:47.565 | .status_code
00:30:47.565 | .command_transient_transport_error'
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
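
The pass/fail decision for this case reduces to the counter extracted just above: with --nvme-error-stat set, the bdev layer tallies NVMe completions per status code, and the jq filter traced at host/digest.sh@28 pulls out the transient transport error count (158 here, one per digest failure the initiator retried). A minimal standalone form of the same check, reusing the socket and bdev name from the trace (a sketch, not part of the suite):

# Query iostat over the bdevperf RPC socket and extract the
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) tally for nvme0n1.
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"

The throughput figures above are internally consistent as well: 20238.75 IOPS of 4096-byte reads is 82,897,920 bytes/s, which is the reported 79.06 MiB/s once divided by 2^20, and io_failed stays 0 because --bdev-retry-count -1 retries every transient error.
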
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3928631
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3928631 ']'
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3928631
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3928631
00:30:47.825 08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3928631'
killing process with pid 3928631
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3928631
00:30:47.825 Received shutdown signal, test time was about 2.000000 seconds
00:30:47.825
00:30:47.825 Latency(us)
00:30:47.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:47.825 ===================================================================================================================
00:30:47.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3928631
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3929406
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3929406 /var/tmp/bperf.sock
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3929406 ']'
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:47.825 [2024-10-01 08:45:39.559763] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:30:47.825 [2024-10-01 08:45:39.559820] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929406 ]
00:30:47.825 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:47.825 Zero copy mechanism will not be used.
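
Before the second run's output begins, the bdevperf invocation traced above is worth unpacking; it carries the run_bperf_err randread 131072 16 arguments through to the workload (the command is verbatim from the trace, the annotations are editorial):

# -m 2: core mask 0x2, i.e. run the reactor on core 1 (matching the notice that follows)
# -r /var/tmp/bperf.sock: the RPC socket the digest helpers drive
# -w randread -o 131072 -q 16 -t 2: workload, I/O size, queue depth, and runtime;
#   the 131072-byte I/O size is also why zero copy is declined against the 65536 threshold
# -z: start idle and wait for a perform_tests RPC instead of running immediately
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
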
00:30:47.825 [2024-10-01 08:45:39.634980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:48.085 [2024-10-01 08:45:39.688690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:30:48.654 08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:48.914 08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:48.914 nvme0n1
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:49.176 08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
08:45:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:49.176 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:49.176 Zero copy mechanism will not be used.
00:30:49.176 Running I/O for 2 seconds...
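
That trace is the entire fault-injection recipe for this case, and it drives two RPC endpoints: bperf_rpc targets the bdevperf process on /var/tmp/bperf.sock, while rpc_cmd uses the suite's default socket (read here as the NVMe-oF target application; the trace does not print that socket path, so treat the attribution as an assumption). Condensed into plain commands, in the traced order:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# count NVMe completions per status code and retry transient errors indefinitely
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# reset any leftover crc32c injection (rpc_cmd, default socket: an assumption, see above)
$RPC accel_error_inject_error -o crc32c -t disable
# attach over TCP with data digest enabled, so READ payloads are crc32c-verified
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# arm the fault: corrupt crc32c results (-t corrupt -i 32, flags as traced), so computed
# data digests stop matching the payload and READs complete with status (00/22)
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# release the bdevperf run that was parked by -z
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

In the completions that follow, len:32 is the 131072-byte I/O size of this run expressed in 4096-byte blocks, versus len:1 in the previous 4096-byte case.
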
00:30:49.176 [2024-10-01 08:45:40.839652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0)
00:30:49.176 [2024-10-01 08:45:40.839684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.176 [2024-10-01 08:45:40.839693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x16d18e0), READ sqid:1 len:32, COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens more qid:1 completions from 08:45:40.850603 through 08:45:41.335775, differing only in timestamp, cid, lba, and cycling sqhd; all but the last full record are omitted here ...]
00:30:49.701 [2024-10-01 08:45:41.345459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0)
00:30:49.701 [2024-10-01 08:45:41.345478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.701 [2024-10-01 08:45:41.345484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:49.701 [2024-10-01 08:45:41.354323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.354342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.354348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.363662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.363681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.363687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.372345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.372364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.372370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.383399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.383418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.383424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.393046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.393064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.393070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.402515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.402534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.402540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.412071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.412089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.412099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.422557] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.422576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.422582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.431166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.431184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.431191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.701 [2024-10-01 08:45:41.441059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.701 [2024-10-01 08:45:41.441077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.701 [2024-10-01 08:45:41.441084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.702 [2024-10-01 08:45:41.446583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.446602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.446608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.702 [2024-10-01 08:45:41.453804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.453822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.453829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.702 [2024-10-01 08:45:41.464113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.464131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.464138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.702 [2024-10-01 08:45:41.471251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.471270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.471277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:30:49.702 [2024-10-01 08:45:41.482376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.482394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.482401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.702 [2024-10-01 08:45:41.494466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.494488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.494494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.702 [2024-10-01 08:45:41.503136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.503154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.503161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.702 [2024-10-01 08:45:41.513232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.513251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.513257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.702 [2024-10-01 08:45:41.522411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.702 [2024-10-01 08:45:41.522429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.702 [2024-10-01 08:45:41.522436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.529496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.529516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.529522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.538617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.538636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.538643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.550270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.550289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.550295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.557733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.557752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.557758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.566928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.566947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.566953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.572551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.572569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.572575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.578416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.578435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.578441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.587236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.587254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.587261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.593415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.593434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.593440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.599694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.599713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.599720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.605414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.605433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.605440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.614348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.614367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.614373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.622953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.622972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.622978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.631191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.631213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.631219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.640833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.963 [2024-10-01 08:45:41.640851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.963 [2024-10-01 08:45:41.640857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.963 [2024-10-01 08:45:41.646183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.646202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 
[2024-10-01 08:45:41.646208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.654167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.654185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.654192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.661958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.661977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.661983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.670651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.670670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.670676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.681010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.681027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.681034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.691949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.691967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.691974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.700712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.700731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.700737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.708423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.708442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.708449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.714127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.714145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.714151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.721988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.722012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.722019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.728238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.728257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.728263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.736668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.736686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.736693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.744334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.744352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.744359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.751874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.751892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.751899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.762539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.762558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.762565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.773365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.773383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.773393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:49.964 [2024-10-01 08:45:41.778364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:49.964 [2024-10-01 08:45:41.778383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.964 [2024-10-01 08:45:41.778389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.785562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.785581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.785588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.791257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.791276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.791283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.794031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.794049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.794056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.804092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.804110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.804116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.811725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.811743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.811750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.820081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.820100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.820106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.226 3286.00 IOPS, 410.75 MiB/s [2024-10-01 08:45:41.831491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.831508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.831515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.841045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.841067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.841073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.851686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.851704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.851711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.861352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.861370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.861377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.868816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.868835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.868842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.874933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.874952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.874958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.881507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.881526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.881533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.890189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.890208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.890214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.899662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.899681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.899688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.907369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.907388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.907394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.917114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.917133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.917140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.926916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.926935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.926941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.936968] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.936987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.936998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.945518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.945537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.945543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.953178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.953197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.953203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.959140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.959159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.959165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.969500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.969519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.969525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.977931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.977950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.226 [2024-10-01 08:45:41.977957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.226 [2024-10-01 08:45:41.988426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.226 [2024-10-01 08:45:41.988445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.227 [2024-10-01 08:45:41.988454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:50.227 [2024-10-01 08:45:41.995778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.227 [2024-10-01 08:45:41.995797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.227 [2024-10-01 08:45:41.995804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.227 [2024-10-01 08:45:42.001541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.227 [2024-10-01 08:45:42.001560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.227 [2024-10-01 08:45:42.001566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.227 [2024-10-01 08:45:42.009963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.227 [2024-10-01 08:45:42.009982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.227 [2024-10-01 08:45:42.009988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.227 [2024-10-01 08:45:42.015959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.227 [2024-10-01 08:45:42.015977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.227 [2024-10-01 08:45:42.015984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.227 [2024-10-01 08:45:42.023253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.227 [2024-10-01 08:45:42.023271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.227 [2024-10-01 08:45:42.023277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.227 [2024-10-01 08:45:42.032743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.227 [2024-10-01 08:45:42.032761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.227 [2024-10-01 08:45:42.032768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.227 [2024-10-01 08:45:42.043964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.227 [2024-10-01 08:45:42.043982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.227 [2024-10-01 08:45:42.043989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.487 [2024-10-01 08:45:42.052668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.487 [2024-10-01 08:45:42.052688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.487 [2024-10-01 08:45:42.052694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.487 [2024-10-01 08:45:42.058658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.487 [2024-10-01 08:45:42.058678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.487 [2024-10-01 08:45:42.058684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.487 [2024-10-01 08:45:42.064973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.487 [2024-10-01 08:45:42.064992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.487 [2024-10-01 08:45:42.065009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.487 [2024-10-01 08:45:42.073504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.487 [2024-10-01 08:45:42.073523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.487 [2024-10-01 08:45:42.073529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.487 [2024-10-01 08:45:42.085476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.487 [2024-10-01 08:45:42.085496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.487 [2024-10-01 08:45:42.085503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.487 [2024-10-01 08:45:42.096190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.487 [2024-10-01 08:45:42.096210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.487 [2024-10-01 08:45:42.096217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.487 [2024-10-01 08:45:42.108111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0) 00:30:50.487 [2024-10-01 08:45:42.108131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.487 [2024-10-01 08:45:42.108137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:50.487 [2024-10-01 08:45:42.117392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0)
00:30:50.488 [2024-10-01 08:45:42.117412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.488 [2024-10-01 08:45:42.117419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... several dozen further READs on tqpair 0x16d18e0 (cids 2, 3, 5, 6 and 8) fail the same way between 08:45:42.121 and 08:45:42.831: a data digest error reported by nvme_tcp_accel_seq_recv_compute_crc32_done, the failed READ printed by nvme_io_qpair_print_command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the repeated records are elided here ...]
00:30:51.012 [2024-10-01 08:45:42.831498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d18e0)
00:30:51.012 [2024-10-01 08:45:42.831517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:51.012 [2024-10-01 08:45:42.831524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
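Each completion above is printed in "(sct/sc)" form: "(00/22)" is status code type 0x0 (generic command status) and status code 0x22, which the NVMe specification defines as Transient Transport Error. The host reports it here because the CRC32C data digest computed over the received payload does not match the digest carried on the wire. A quick way to size such a failure run is to count the completion records; a minimal bash sketch, assuming the console output has been saved to autotest.log (a hypothetical file name):

    # Count completions whose status was printed as "(00/22)", i.e.
    # sct=0x0 (generic command status) / sc=0x22 (transient transport error).
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' autotest.log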
00:30:51.272 3318.50 IOPS, 414.81 MiB/s
00:30:51.273 Latency(us)
00:30:51.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:51.273 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:51.273 nvme0n1 : 2.05 3250.92 406.37 0.00 0.00 4826.47 744.11 46749.01
00:30:51.273 ===================================================================================================================
00:30:51.273 Total : 3250.92 406.37 0.00 0.00 4826.47 744.11 46749.01
00:30:51.273 {
00:30:51.273   "results": [
00:30:51.273     {
00:30:51.273       "job": "nvme0n1",
00:30:51.273       "core_mask": "0x2",
00:30:51.273       "workload": "randread",
00:30:51.273       "status": "finished",
00:30:51.273       "queue_depth": 16,
00:30:51.273       "io_size": 131072,
00:30:51.273       "runtime": 2.046496,
00:30:51.273       "iops": 3250.9225524994918,
00:30:51.273       "mibps": 406.36531906243647,
00:30:51.273       "io_failed": 0,
00:30:51.273       "io_timeout": 0,
00:30:51.273       "avg_latency_us": 4826.474624981211,
00:30:51.273       "min_latency_us": 744.1066666666667,
00:30:51.273       "max_latency_us": 46749.013333333336
00:30:51.273     }
00:30:51.273   ],
00:30:51.273   "core_count": 1
00:30:51.273 }
08:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
08:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
08:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:51.273 | .driver_specific
00:30:51.273 | .nvme_error
00:30:51.273 | .status_code
00:30:51.273 | .command_transient_transport_error'
08:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3929406
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3929406 ']'
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3929406
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3929406
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3929406'
killing process with pid 3929406
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3929406
Received shutdown signal, test time was about 2.000000 seconds
00:30:51.534
00:30:51.534 Latency(us)
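The (( 214 > 0 )) check above is the pass condition for the randread phase: get_transient_errcount reads the per-status error counters that the bdev_nvme layer keeps when error statistics are enabled, via the bdev_get_iostat RPC, and jq picks out the counter for transient transport errors. Note that io_failed is 0 in the JSON summary even though 214 commands completed with (00/22): the bdev layer retried each of them. A minimal standalone sketch of the same check, assuming rpc.py and jq are on PATH and bdevperf is still serving RPCs on /var/tmp/bperf.sock:

    # Sketch of the get_transient_errcount helper seen in the trace above.
    get_transient_errcount() {
        local bdev=$1
        rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    # The phase passes only if at least one injected digest error surfaced
    # as a transient transport error (this run counted 214).
    (( $(get_transient_errcount nvme0n1) > 0 ))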
00:30:51.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:51.534 ===================================================================================================================
00:30:51.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3929406
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3930091
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3930091 /var/tmp/bperf.sock
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3930091 ']'
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
08:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:51.534 [2024-10-01 08:45:43.322920] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
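For the randwrite phase the script launches a fresh bdevperf in wait-for-tests mode (-z) with its own RPC socket and then blocks in waitforlisten until the application answers RPCs. A minimal sketch of that launch-and-wait pattern, assuming it runs from the SPDK repository root; the polling loop is an illustrative stand-in for the autotest waitforlisten helper, not its actual implementation:

    # Start bdevperf idle (-z); it will only run I/O once perform_tests
    # is sent over the RPC socket given with -r.
    BPERF_SOCK=/var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Poll until the RPC server responds; rpc_get_methods is a cheap
    # request every SPDK application serves.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s "$BPERF_SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done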
00:30:51.534 [2024-10-01 08:45:43.322980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930091 ]
00:30:51.794 [2024-10-01 08:45:43.397088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:51.794 [2024-10-01 08:45:43.450664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:30:52.365 08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:52.625 08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:52.885 nvme0n1
00:30:52.885 08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
08:45:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:52.885 Running I/O for 2 seconds...
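The RPC sequence just traced is what makes the WRITE failures below deterministic: the new bdevperf instance is told to keep per-status NVMe error counters and to retry failed I/O indefinitely at the bdev layer, the controller is attached with the TCP data digest enabled (--ddgst), and the accel error module is then told to corrupt crc32c results, so subsequent digest checks fail. A sketch of the same sequence as plain RPC calls with the parameters printed above; note the trace uses two RPC servers (bperf_rpc for the bdevperf socket, rpc_cmd for the main application's default socket, assumed here to be /var/tmp/spdk.sock):

    BPERF_RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
    APP_RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    # Count errors per NVMe status code and retry failed I/O forever, so
    # injected digest errors are recorded but never fail the workload.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no crc32c error injection is active while attaching.
    $APP_RPC accel_error_inject_error -o crc32c -t disable
    # Attach the target with the data digest (--ddgst) turned on.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Start corrupting crc32c results (-t corrupt -i 256, as traced above).
    $APP_RPC accel_error_inject_error -o crc32c -t corrupt -i 256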
00:30:53.147 [2024-10-01 08:45:44.708021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ebfd0 00:30:53.147 [2024-10-01 08:45:44.709780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.147 [2024-10-01 08:45:44.709808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:53.147 [2024-10-01 08:45:44.718465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8618 00:30:53.147 [2024-10-01 08:45:44.719561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.147 [2024-10-01 08:45:44.719580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.147 [2024-10-01 08:45:44.730422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8618 00:30:53.147 [2024-10-01 08:45:44.731519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.147 [2024-10-01 08:45:44.731536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.147 [2024-10-01 08:45:44.742367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8618 00:30:53.147 [2024-10-01 08:45:44.743458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.147 [2024-10-01 08:45:44.743479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.147 [2024-10-01 08:45:44.754312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8618 00:30:53.147 [2024-10-01 08:45:44.755403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.147 [2024-10-01 08:45:44.755420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.147 [2024-10-01 08:45:44.766255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8618 00:30:53.147 [2024-10-01 08:45:44.767371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.147 [2024-10-01 08:45:44.767388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.147 [2024-10-01 08:45:44.778189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8618 00:30:53.147 [2024-10-01 08:45:44.779263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.147 [2024-10-01 08:45:44.779280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:30:53.147 [2024-10-01 08:45:44.790126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8618 00:30:53.147 [2024-10-01 08:45:44.791225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.147 [2024-10-01 08:45:44.791242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.802036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eb760 00:30:53.148 [2024-10-01 08:45:44.803148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.803165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.813202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198feb58 00:30:53.148 [2024-10-01 08:45:44.814276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.814293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.825941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef6a8 00:30:53.148 [2024-10-01 08:45:44.827039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.827056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.837101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7da8 00:30:53.148 [2024-10-01 08:45:44.838169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.838185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.849833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eb760 00:30:53.148 [2024-10-01 08:45:44.850927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.850946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.861000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198feb58 00:30:53.148 [2024-10-01 08:45:44.862049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.862065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.873706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef6a8 00:30:53.148 [2024-10-01 08:45:44.874804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.874821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.885663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8618 00:30:53.148 [2024-10-01 08:45:44.886742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.886758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.897609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eaef0 00:30:53.148 [2024-10-01 08:45:44.898715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.898731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.909577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e9e10 00:30:53.148 [2024-10-01 08:45:44.910667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.910684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.921509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198feb58 00:30:53.148 [2024-10-01 08:45:44.922587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.922604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.933452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fe2e8 00:30:53.148 [2024-10-01 08:45:44.934527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.934544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.945404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ee5c8 00:30:53.148 [2024-10-01 08:45:44.946500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.946516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:53.148 [2024-10-01 08:45:44.957338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7538 00:30:53.148 [2024-10-01 08:45:44.958487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.148 [2024-10-01 08:45:44.958503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:53.409 [2024-10-01 08:45:44.971098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ee5c8 00:30:53.409 [2024-10-01 08:45:44.972820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.409 [2024-10-01 08:45:44.972836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:44.981067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eea00 00:30:53.410 [2024-10-01 08:45:44.982304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:44.982320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:44.993784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e6300 00:30:53.410 [2024-10-01 08:45:44.995043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:44.995060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.007215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198de8a8 00:30:53.410 [2024-10-01 08:45:45.009096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.009113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.017566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eea00 00:30:53.410 [2024-10-01 08:45:45.018810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.018826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.029503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eea00 00:30:53.410 [2024-10-01 08:45:45.030734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.030751] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.041423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eea00 00:30:53.410 [2024-10-01 08:45:45.042653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.042669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.053333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eea00 00:30:53.410 [2024-10-01 08:45:45.054565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.054581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.065258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eea00 00:30:53.410 [2024-10-01 08:45:45.066488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.066505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.077164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eea00 00:30:53.410 [2024-10-01 08:45:45.078395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.078412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.088274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e4140 00:30:53.410 [2024-10-01 08:45:45.089485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.089501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.100990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f35f0 00:30:53.410 [2024-10-01 08:45:45.102225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.102241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.112132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f57b0 00:30:53.410 [2024-10-01 08:45:45.113347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.113363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.126405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198de470 00:30:53.410 [2024-10-01 08:45:45.128273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.128289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.136008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f4b08 00:30:53.410 [2024-10-01 08:45:45.137233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.137249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.150326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e38d0 00:30:53.410 [2024-10-01 08:45:45.152196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.152212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.160748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198df550 00:30:53.410 [2024-10-01 08:45:45.161989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.162012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.174231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e4de8 00:30:53.410 [2024-10-01 08:45:45.176097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.176113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.185015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 00:30:53.410 [2024-10-01 08:45:45.186411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.186428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.198666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198de8a8 00:30:53.410 [2024-10-01 08:45:45.200695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 
08:45:45.200712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.208277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e88f8 00:30:53.410 [2024-10-01 08:45:45.209650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.209666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:53.410 [2024-10-01 08:45:45.221011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e99d8 00:30:53.410 [2024-10-01 08:45:45.222391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.410 [2024-10-01 08:45:45.222407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.232947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eaab8 00:30:53.672 [2024-10-01 08:45:45.234333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.234349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.244897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef270 00:30:53.672 [2024-10-01 08:45:45.246278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.246294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.258415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f4298 00:30:53.672 [2024-10-01 08:45:45.260405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.260421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.268819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e1f80 00:30:53.672 [2024-10-01 08:45:45.270235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.270252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.279984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fc998 00:30:53.672 [2024-10-01 08:45:45.281330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
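
The "(00/22)" field in each completion above is the NVMe status pair (status code type / status code): type 0x0 is the generic command status set, and code 0x22 is defined as "Transient Transport Error", which together with dnr:0 (do-not-retry clear) marks each of these failed WRITEs as retryable. A minimal Python sketch of that decoding; the helper name and lookup table are illustrative, not SPDK code:

def decode_status(field: str) -> str:
    # Split a "(sct/sc)" pair such as "(00/22)" into its two hex values.
    sct, sc = (int(x, 16) for x in field.strip("()").split("/"))
    # Generic command status (sct 0x0) values seen in this log.
    generic = {0x00: "SUCCESS", 0x22: "TRANSIENT TRANSPORT ERROR"}
    if sct == 0x0:
        return generic.get(sc, f"generic sc=0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

print(decode_status("(00/22)"))  # -> TRANSIENT TRANSPORT ERROR
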
00:30:53.672 [2024-10-01 08:45:45.281347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.292708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ed920 00:30:53.672 [2024-10-01 08:45:45.294066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.294083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.304649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e73e0 00:30:53.672 [2024-10-01 08:45:45.306027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.306043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.316586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e38d0 00:30:53.672 [2024-10-01 08:45:45.317959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.317976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.328508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198de8a8 00:30:53.672 [2024-10-01 08:45:45.329884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.329900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.340462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f5be8 00:30:53.672 [2024-10-01 08:45:45.341842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.341859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.352451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f4298 00:30:53.672 [2024-10-01 08:45:45.353832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.353849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.364415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef270 00:30:53.672 [2024-10-01 08:45:45.365756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7799 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.365772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.376363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eaab8 00:30:53.672 [2024-10-01 08:45:45.377743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.377759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.388314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e99d8 00:30:53.672 [2024-10-01 08:45:45.389693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.389710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.400243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e88f8 00:30:53.672 [2024-10-01 08:45:45.401636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.672 [2024-10-01 08:45:45.401652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.672 [2024-10-01 08:45:45.412170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fef90 00:30:53.672 [2024-10-01 08:45:45.413552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.673 [2024-10-01 08:45:45.413569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.673 [2024-10-01 08:45:45.424120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fda78 00:30:53.673 [2024-10-01 08:45:45.425499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.673 [2024-10-01 08:45:45.425515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.673 [2024-10-01 08:45:45.436057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fa3a0 00:30:53.673 [2024-10-01 08:45:45.437435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.673 [2024-10-01 08:45:45.437452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.673 [2024-10-01 08:45:45.448006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198dfdc0 00:30:53.673 [2024-10-01 08:45:45.449394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4239 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:53.673 [2024-10-01 08:45:45.449410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.673 [2024-10-01 08:45:45.459948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e5220 00:30:53.673 [2024-10-01 08:45:45.461325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.673 [2024-10-01 08:45:45.461342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.673 [2024-10-01 08:45:45.471938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fd640 00:30:53.673 [2024-10-01 08:45:45.473321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.673 [2024-10-01 08:45:45.473340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.673 [2024-10-01 08:45:45.483902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fc560 00:30:53.673 [2024-10-01 08:45:45.485284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.673 [2024-10-01 08:45:45.485301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.497396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198edd58 00:30:53.935 [2024-10-01 08:45:45.499392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.499407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.507786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198df118 00:30:53.935 [2024-10-01 08:45:45.509161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.509177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.521251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eee38 00:30:53.935 [2024-10-01 08:45:45.523285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.523300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.533132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e1710 00:30:53.935 [2024-10-01 08:45:45.535117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3278 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.535134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.543531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ea680 00:30:53.935 [2024-10-01 08:45:45.544913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.544929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.555465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198df988 00:30:53.935 [2024-10-01 08:45:45.556834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.556850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.568962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e6300 00:30:53.935 [2024-10-01 08:45:45.570945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.570962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.579340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198edd58 00:30:53.935 [2024-10-01 08:45:45.580726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.580743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.591252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198edd58 00:30:53.935 [2024-10-01 08:45:45.592626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.592642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.603168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198edd58 00:30:53.935 [2024-10-01 08:45:45.604545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.604561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.615111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198edd58 00:30:53.935 [2024-10-01 08:45:45.616443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 
nsid:1 lba:9969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.616459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.627019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7970 00:30:53.935 [2024-10-01 08:45:45.628382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.628399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.638980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f6890 00:30:53.935 [2024-10-01 08:45:45.640339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.640355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.650955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e9e10 00:30:53.935 [2024-10-01 08:45:45.652432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.652449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.663013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198dece0 00:30:53.935 [2024-10-01 08:45:45.664369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.664386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.675002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eff18 00:30:53.935 [2024-10-01 08:45:45.676319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.676336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.686968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e6738 00:30:53.935 21153.00 IOPS, 82.63 MiB/s [2024-10-01 08:45:45.688326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.688341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.699147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f3e60 00:30:53.935 [2024-10-01 08:45:45.700481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.935 [2024-10-01 08:45:45.700497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.935 [2024-10-01 08:45:45.711099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f5be8 00:30:53.936 [2024-10-01 08:45:45.712454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.936 [2024-10-01 08:45:45.712471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.936 [2024-10-01 08:45:45.723056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198edd58 00:30:53.936 [2024-10-01 08:45:45.724435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.936 [2024-10-01 08:45:45.724452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:53.936 [2024-10-01 08:45:45.736629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f96f8 00:30:53.936 [2024-10-01 08:45:45.738642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.936 [2024-10-01 08:45:45.738659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:53.936 [2024-10-01 08:45:45.748525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e9e10 00:30:53.936 [2024-10-01 08:45:45.750530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:53.936 [2024-10-01 08:45:45.750546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.758927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e12d8 00:30:54.197 [2024-10-01 08:45:45.760300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.760317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.770900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fc128 00:30:54.197 [2024-10-01 08:45:45.772264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.772281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.784415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198edd58 00:30:54.197 [2024-10-01 08:45:45.786422] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.786441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.795186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e5a90 00:30:54.197 [2024-10-01 08:45:45.796709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.796725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.804978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198eb760 00:30:54.197 [2024-10-01 08:45:45.805861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.805878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.818471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198df550 00:30:54.197 [2024-10-01 08:45:45.819976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.819993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.828859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef6a8 00:30:54.197 [2024-10-01 08:45:45.829721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.829738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.840772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef6a8 00:30:54.197 [2024-10-01 08:45:45.841637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.841654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.852711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef6a8 00:30:54.197 [2024-10-01 08:45:45.853580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.853596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.864645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef6a8 00:30:54.197 [2024-10-01 
08:45:45.865515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.865532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.876577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef6a8 00:30:54.197 [2024-10-01 08:45:45.877445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.877463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.888504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ef6a8 00:30:54.197 [2024-10-01 08:45:45.889379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.889396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.899626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f6458 00:30:54.197 [2024-10-01 08:45:45.900479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.900495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.914444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fd208 00:30:54.197 [2024-10-01 08:45:45.916106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.916122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.924809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 00:30:54.197 [2024-10-01 08:45:45.925828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.925844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.936745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 00:30:54.197 [2024-10-01 08:45:45.937771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.937788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.948671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 
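
Every data_crc32_calc_done error above reports the same condition: the CRC-32C digest computed over a received data PDU did not match the digest carried with the PDU, and the affected command is then completed with the transient transport status printed alongside it. The steady stream of these entries suggests the test exercises this path deliberately. For reference, a minimal bitwise Python sketch of CRC-32C (the NVMe/TCP data digest algorithm); production code uses lookup tables or the dedicated CPU instruction instead:

def crc32c(data: bytes) -> int:
    # CRC-32C (Castagnoli), reflected form, polynomial 0x82F63B78.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

# Standard check value for this CRC variant:
assert crc32c(b"123456789") == 0xE3069283
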
00:30:54.197 [2024-10-01 08:45:45.949703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.949719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.960603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 00:30:54.197 [2024-10-01 08:45:45.961625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.961642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.972543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 00:30:54.197 [2024-10-01 08:45:45.973556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.197 [2024-10-01 08:45:45.973572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.197 [2024-10-01 08:45:45.984594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 00:30:54.198 [2024-10-01 08:45:45.985617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.198 [2024-10-01 08:45:45.985634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.198 [2024-10-01 08:45:45.996521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 00:30:54.198 [2024-10-01 08:45:45.997544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.198 [2024-10-01 08:45:45.997561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.198 [2024-10-01 08:45:46.008461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e27f0 00:30:54.198 [2024-10-01 08:45:46.009488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.198 [2024-10-01 08:45:46.009505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.458 [2024-10-01 08:45:46.020361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fc998 00:30:54.459 [2024-10-01 08:45:46.021397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.021414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.031533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) 
with pdu=0x2000198e99d8 00:30:54.459 [2024-10-01 08:45:46.032539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.032555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.045794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e88f8 00:30:54.459 [2024-10-01 08:45:46.047458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.047475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.055400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e1710 00:30:54.459 [2024-10-01 08:45:46.056402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.056419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.068108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198ed0b0 00:30:54.459 [2024-10-01 08:45:46.069142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.069159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.080072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e6738 00:30:54.459 [2024-10-01 08:45:46.081057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.081073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.091231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e9168 00:30:54.459 [2024-10-01 08:45:46.092237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.092257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.105850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.459 [2024-10-01 08:45:46.107503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.107520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.115447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x69e5f0) with pdu=0x2000198e5ec8 00:30:54.459 [2024-10-01 08:45:46.116436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.116452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.128477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f0350 00:30:54.459 [2024-10-01 08:45:46.129769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.129786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.141605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e1710 00:30:54.459 [2024-10-01 08:45:46.143262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.143279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.151213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e4140 00:30:54.459 [2024-10-01 08:45:46.152210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.152227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.164588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e6300 00:30:54.459 [2024-10-01 08:45:46.165905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.165921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.175702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fc560 00:30:54.459 [2024-10-01 08:45:46.176977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.176997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.189915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fc560 00:30:54.459 [2024-10-01 08:45:46.191849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.191866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.199535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7da8 00:30:54.459 [2024-10-01 08:45:46.200807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.200824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.212265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7da8 00:30:54.459 [2024-10-01 08:45:46.213540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.213556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.224197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7da8 00:30:54.459 [2024-10-01 08:45:46.225467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.225483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.236141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7da8 00:30:54.459 [2024-10-01 08:45:46.237408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.237424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.248054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7da8 00:30:54.459 [2024-10-01 08:45:46.249333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.249349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.261510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7da8 00:30:54.459 [2024-10-01 08:45:46.263422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.263438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:54.459 [2024-10-01 08:45:46.271915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f3a28 00:30:54.459 [2024-10-01 08:45:46.273206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.459 [2024-10-01 08:45:46.273222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.285417] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fc560 00:30:54.720 [2024-10-01 08:45:46.287327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.287345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.295823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fb480 00:30:54.720 [2024-10-01 08:45:46.297058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.297075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.309416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f7da8 00:30:54.720 [2024-10-01 08:45:46.311336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.311353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.319833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f3a28 00:30:54.720 [2024-10-01 08:45:46.321124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.321140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.331786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f2948 00:30:54.720 [2024-10-01 08:45:46.333056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.333073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.342929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fcdd0 00:30:54.720 [2024-10-01 08:45:46.344205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.344221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.355628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fcdd0 00:30:54.720 [2024-10-01 08:45:46.356890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.356906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.367540] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fcdd0 00:30:54.720 [2024-10-01 08:45:46.368805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.368821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.379459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fcdd0 00:30:54.720 [2024-10-01 08:45:46.380722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.380738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.391376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fcdd0 00:30:54.720 [2024-10-01 08:45:46.392645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.392661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.403308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fcdd0 00:30:54.720 [2024-10-01 08:45:46.404568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.404584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.415215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fcdd0 00:30:54.720 [2024-10-01 08:45:46.416440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.416456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.427110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f20d8 00:30:54.720 [2024-10-01 08:45:46.428326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.428343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.439045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.720 [2024-10-01 08:45:46.440314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.440331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 
08:45:46.452511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f8e88 00:30:54.720 [2024-10-01 08:45:46.454371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.454387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.462124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f20d8 00:30:54.720 [2024-10-01 08:45:46.463374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.463389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.474874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f31b8 00:30:54.720 [2024-10-01 08:45:46.476127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.476143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.488362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198df118 00:30:54.720 [2024-10-01 08:45:46.490259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.490275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.497894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fc128 00:30:54.720 [2024-10-01 08:45:46.499135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.720 [2024-10-01 08:45:46.499150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:54.720 [2024-10-01 08:45:46.510601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fd640 00:30:54.720 [2024-10-01 08:45:46.511822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.721 [2024-10-01 08:45:46.511841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.721 [2024-10-01 08:45:46.524100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198dece0 00:30:54.721 [2024-10-01 08:45:46.525988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.721 [2024-10-01 08:45:46.526007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
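
At this volume a per-PDU tally is easier to read than the raw stream. A small Python sketch that summarizes the digest errors from a saved copy of this console output; "console.log" is a placeholder path, not a file produced by this job:

import re
from collections import Counter

pat = re.compile(r"Data digest error on tqpair=\((0x[0-9a-f]+)\) "
                 r"with pdu=(0x[0-9a-f]+)")
counts = Counter()
with open("console.log") as f:  # placeholder path
    for line in f:
        for _tqpair, pdu in pat.findall(line):
            counts[pdu] += 1
for pdu, n in counts.most_common(5):
    print(f"{pdu}: {n} digest errors")
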
00:30:54.721 [2024-10-01 08:45:46.534462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.721 [2024-10-01 08:45:46.535711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.721 [2024-10-01 08:45:46.535727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.546388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.547631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.547647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.558290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.559539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.559555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.570192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.571431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.571447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.582103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.583340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.583356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.594050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.595293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.595309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.605956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.607177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.607193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.617859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.619112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.619129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.629754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.630996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.631013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.641659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 [2024-10-01 08:45:46.642865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.642881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.653580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198fbcf0 00:30:54.982 [2024-10-01 08:45:46.654810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.654826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.667184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e0ea0 00:30:54.982 [2024-10-01 08:45:46.669050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.669066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.676779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198f20d8 00:30:54.982 [2024-10-01 08:45:46.678009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.678024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:54.982 [2024-10-01 08:45:46.689487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e5f0) with pdu=0x2000198e3d08 00:30:54.982 21279.50 IOPS, 83.12 MiB/s [2024-10-01 08:45:46.690751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:54.982 [2024-10-01 08:45:46.690765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
00:30:54.982
00:30:54.982 Latency(us)
00:30:54.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:54.982 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:54.982 nvme0n1 : 2.01 21287.45 83.15 0.00 0.00 6004.59 2102.61 17367.04
00:30:54.982 ===================================================================================================================
00:30:54.982 Total : 21287.45 83.15 0.00 0.00 6004.59 2102.61 17367.04
00:30:54.982 {
00:30:54.982 "results": [
00:30:54.982 {
00:30:54.982 "job": "nvme0n1",
00:30:54.982 "core_mask": "0x2",
00:30:54.982 "workload": "randwrite",
00:30:54.982 "status": "finished",
00:30:54.982 "queue_depth": 128,
00:30:54.982 "io_size": 4096,
00:30:54.982 "runtime": 2.005266,
00:30:54.982 "iops": 21287.45014377145,
00:30:54.982 "mibps": 83.15410212410723,
00:30:54.982 "io_failed": 0,
00:30:54.982 "io_timeout": 0,
00:30:54.982 "avg_latency_us": 6004.588198436683,
00:30:54.982 "min_latency_us": 2102.6133333333332,
00:30:54.982 "max_latency_us": 17367.04
00:30:54.982 }
00:30:54.982 ],
00:30:54.982 "core_count": 1
00:30:54.982 }
00:30:54.982 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:54.982 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:54.982 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:54.982 | .driver_specific
00:30:54.982 | .nvme_error
00:30:54.982 | .status_code
00:30:54.982 | .command_transient_transport_error'
00:30:54.982 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3930091
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3930091 ']'
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3930091
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3930091
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3930091'
killing process with pid 3930091
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3930091
Received shutdown signal, test time was about 2.000000 seconds
00:30:55.243
00:30:55.243 Latency(us)
00:30:55.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:55.243 ===================================================================================================================
00:30:55.243 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
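The errcount check traced above is a single iostat RPC piped through jq. A minimal standalone sketch of the same check, assuming the workspace paths, bperf RPC socket, and nvme0n1 bdev from this trace (and that the controller was set up with --nvme-error-stat, which is what populates the nvme_error block in the iostat output):

    # Count NVMe "Command Transient Transport Error" completions seen by a bdev.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The test passes only if at least one injected digest error surfaced as a
    # transient transport error; this run counted 167 of them.
    (( errcount > 0 ))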
00:30:55.243 08:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3930091
00:30:55.243 08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3930772
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3930772 /var/tmp/bperf.sock
00:30:55.503 08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3930772 ']'
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:55.504 [2024-10-01 08:45:47.116861] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:30:55.504 [2024-10-01 08:45:47.116917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930772 ]
00:30:55.504 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:55.504 Zero copy mechanism will not be used.
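The relaunch traced above is the standard bperf pattern: start bdevperf idle with -z against a private RPC socket, then poll until the application answers RPCs. A rough sketch of that flow, with waitforlisten reduced to a bare retry loop (the real common/autotest_common.sh helper does more bookkeeping; rpc_get_methods is used here only as a cheap liveness probe, which is an assumption of this sketch rather than what the helper actually calls):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 128 KiB random writes at queue depth 16 for 2 seconds on core 1 (-m 2);
    # -z keeps bdevperf idle until perform_tests is sent over the RPC socket.
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Simplified stand-in for waitforlisten: retry until the app responds.
    for ((i = 0; i < 100; i++)); do  # mirrors max_retries=100 in the trace
        "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done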
00:30:55.504 [2024-10-01 08:45:47.193048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:55.504 [2024-10-01 08:45:47.246152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:30:56.073 08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
08:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:56.332 08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:56.332 08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:56.592 nvme0n1
00:30:56.592 08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:56.592 08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
08:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:56.853 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:56.853 Zero copy mechanism will not be used.
00:30:56.853 Running I/O for 2 seconds...
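Stripped of the xtrace noise, the setup traced above is four RPCs plus the kick-off: enable per-status-code NVMe error counters with unlimited bdev retries, keep crc32c error injection disabled while the controller attaches so the connection itself comes up clean, attach over TCP with data digest (--ddgst) enabled, arm the corruption, and start the queued workload. A condensed sketch using the same rpc.py calls as the trace (paths and addresses are the ones from this job):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Count error completions per NVMe status code; retry failed I/O
    # indefinitely so injected digest errors never fail the job outright.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Leave crc32c corruption off while connecting.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach the target with the data digest enabled on the TCP qpair.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption (-i 32, as in the trace); with digests now
    # mismatching, WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR
    # (00/22), which is exactly what the records below show.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the queued randwrite workload in the idle bdevperf instance.
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests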
00:30:56.853 [2024-10-01 08:45:48.457161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90
00:30:56.853 [2024-10-01 08:45:48.457521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.853 [2024-10-01 08:45:48.457551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[repeated records elided: the 128 KiB randwrite run logs the same three-line pattern throughout, a data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90, the offending WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion]
00:30:57.656 [2024-10-01 08:45:49.213338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90
00:30:57.656 [2024-10-01 08:45:49.213674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:57.656 [2024-10-01 08:45:49.213692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.221725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.222070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.222089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.228508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.228849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.228867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.236818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.237189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.237207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.245869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.246118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.246134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.250446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.250647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.250664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.256931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.257136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.257153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.263533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.263734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.263751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.269246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.269542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.269563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.273494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.273694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.273711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.279349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.279550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.279567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.283136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.283336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.283353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.288518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.288719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.288736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.294769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.295104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.295122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.303919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.304235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.304253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.309503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.309573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.309588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.318402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.318683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.318700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.324726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.324941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.324959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.329768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.656 [2024-10-01 08:45:49.329977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.656 [2024-10-01 08:45:49.329999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.656 [2024-10-01 08:45:49.337128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.337319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.337337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.341321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.341511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.341528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.344954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.345058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 
[2024-10-01 08:45:49.345074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.349145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.349328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.349345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.352970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.353162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.353178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.358109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.358309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.358325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.365045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.365388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.365406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.369066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.369250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.369267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.373227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.373408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.373425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.376942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.377133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.377150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.381805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.381984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.382007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.387649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.387834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.387852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.394986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.395178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.395195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.398843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.399031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.399047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.405632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.405944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.405962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.411034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.411219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.411239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.418108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.418430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.418447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.422343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.422699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.422717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.428602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.428942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.428960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.435093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.435277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.435294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.657 4208.00 IOPS, 526.00 MiB/s [2024-10-01 08:45:49.444935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.445122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.445139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.453721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.453990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.454016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.463412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.463759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.657 [2024-10-01 08:45:49.463776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.657 [2024-10-01 08:45:49.473268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.657 [2024-10-01 08:45:49.473625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.658 [2024-10-01 08:45:49.473644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.483453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.483692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.483709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.493474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.493747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.493764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.503337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.503606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.503624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.513071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.513290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.513307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.522888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.523158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.523176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.532763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.533132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.533151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.542579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 
[2024-10-01 08:45:49.542918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.542936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.552081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.552361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.552378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.562387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.562618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.562637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.571935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.572172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.572188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.581913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.582210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.582228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.592598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.592822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.592838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.601463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.601633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.601649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.611422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with 
pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.611706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.611723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.622274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.622593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.622610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.633090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.633370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.633387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.643923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.644175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.644192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.652448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.652508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.652526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.656949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.657009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.657024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.662926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.662984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.663005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.672108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.672181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.672196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.678045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.678102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.678117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.681899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.681951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.681966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.685519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.919 [2024-10-01 08:45:49.685571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.919 [2024-10-01 08:45:49.685587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.919 [2024-10-01 08:45:49.689406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.689464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.689480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.694196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.694252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.694267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.698406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.698487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.698502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.702303] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.702367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.702382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.706375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.706440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.706455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.712784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.713049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.713065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.718143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.718415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.718432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.725876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.726148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.726165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.731948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.732022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.732037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.735907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.735966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.735982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:57.920 [2024-10-01 08:45:49.739902] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:57.920 [2024-10-01 08:45:49.739969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:57.920 [2024-10-01 08:45:49.739985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:58.181 [2024-10-01 08:45:49.743943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.744003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.744018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.747502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.747561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.747577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.751059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.751113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.751129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.756185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.756261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.756276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.762392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.762455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.762471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.766037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.766097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.766112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:58.182 
[2024-10-01 08:45:49.769692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.769747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.769762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.773286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.773347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.773363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.777056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.777107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.777128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.780634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.780689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.780704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.784172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.784226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.784241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.787700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.787752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.787768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:58.182 [2024-10-01 08:45:49.791396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.182 [2024-10-01 08:45:49.791452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.182 [2024-10-01 08:45:49.791468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0
00:30:58.182 [2024-10-01 08:45:49.795512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90
00:30:58.182 [2024-10-01 08:45:49.795568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.182 [2024-10-01 08:45:49.795584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record sequence (tcp.c:2233 data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90, nvme_qpair.c:243 WRITE sqid:1 cid:0 nsid:1 len:32, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats from 08:45:49.799 through 08:45:50.439 with varying lba and sqhd values; duplicate records elided ...]
00:30:58.710 4951.50 IOPS, 618.94 MiB/s
IOPS, 618.94 MiB/s [2024-10-01 08:45:50.449990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x69e930) with pdu=0x2000198fef90 00:30:58.710 [2024-10-01 08:45:50.450278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.710 [2024-10-01 08:45:50.450295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:58.710 00:30:58.710 Latency(us) 00:30:58.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.710 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:58.710 nvme0n1 : 2.01 4943.61 617.95 0.00 0.00 3229.42 1686.19 13653.33 00:30:58.710 =================================================================================================================== 00:30:58.710 Total : 4943.61 617.95 0.00 0.00 3229.42 1686.19 13653.33 00:30:58.710 { 00:30:58.710 "results": [ 00:30:58.710 { 00:30:58.710 "job": "nvme0n1", 00:30:58.710 "core_mask": "0x2", 00:30:58.710 "workload": "randwrite", 00:30:58.710 "status": "finished", 00:30:58.710 "queue_depth": 16, 00:30:58.710 "io_size": 131072, 00:30:58.710 "runtime": 2.007236, 00:30:58.710 "iops": 4943.614004531604, 00:30:58.710 "mibps": 617.9517505664505, 00:30:58.710 "io_failed": 0, 00:30:58.710 "io_timeout": 0, 00:30:58.710 "avg_latency_us": 3229.4247357989857, 00:30:58.710 "min_latency_us": 1686.1866666666667, 00:30:58.710 "max_latency_us": 13653.333333333334 00:30:58.710 } 00:30:58.710 ], 00:30:58.710 "core_count": 1 00:30:58.710 } 00:30:58.710 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:58.710 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:58.710 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:58.710 | .driver_specific 00:30:58.710 | .nvme_error 00:30:58.710 | .status_code 00:30:58.710 | .command_transient_transport_error' 00:30:58.710 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:58.969 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 320 > 0 )) 00:30:58.969 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3930772 00:30:58.969 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3930772 ']' 00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3930772 00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3930772 00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:58.970 08:45:50 
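
For readers following the trace: the pass/fail decision just above comes from the bdev's NVMe error counters, queried over the bdevperf RPC socket and filtered with the exact jq expression shown in the trace. A minimal sketch of that same check (variable names here are illustrative only):

    # ask the bperf RPC server for nvme0n1's error counters and require at
    # least one transient transport error (the trace above counted 320)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # non-zero means the injected digest errors were observed end-to-end
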
00:30:58.969 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3930772
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3930772 ']'
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3930772
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3930772
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3930772'
00:30:58.970 killing process with pid 3930772
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3930772
00:30:58.970 Received shutdown signal, test time was about 2.000000 seconds
00:30:58.970
00:30:58.970 Latency(us)
00:30:58.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:58.970 ===================================================================================================================
00:30:58.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:58.970 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3930772
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3928373
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3928373 ']'
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3928373
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3928373
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3928373'
00:30:59.230 killing process with pid 3928373
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3928373
00:30:59.230 08:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3928373
00:30:59.230
00:30:59.230 real 0m16.440s
00:30:59.230 user 0m32.534s
00:30:59.230 sys 0m3.459s
00:30:59.230 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:59.230 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:59.230 ************************************
00:30:59.230 END TEST nvmf_digest_error
00:30:59.230 ************************************
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:59.491 rmmod nvme_tcp
00:30:59.491 rmmod nvme_fabrics
00:30:59.491 rmmod nvme_keyring
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 3928373 ']'
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 3928373
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3928373 ']'
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3928373
00:30:59.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3928373) - No such process
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3928373 is not found'
00:30:59.491 Process with pid 3928373 is not found
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:59.491 08:45:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:01.402 08:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:01.714
00:31:01.714 real 0m42.363s
00:31:01.714 user 1m7.102s
00:31:01.714 sys 0m12.391s
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:31:01.714 ************************************
00:31:01.714 END TEST nvmf_digest
00:31:01.714 ************************************
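
The nvmftestfini teardown traced above reduces to a short sequence. A condensed sketch; the internals of _remove_spdk_ns are not shown in this log, so the netns deletion line is an assumption:

    # unload the host-side NVMe/TCP modules (the trace retries this up to 20 times)
    modprobe -v -r nvme-tcp        # also drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    # drop only the firewall rules the test tagged with SPDK_NVMF comments
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # discard the target-side namespace (assumed implementation of _remove_spdk_ns)
    ip netns delete cvl_0_0_ns_spdk
    # clear the initiator-side address left on the second port
    ip -4 addr flush cvl_0_1
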
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:01.714 08:45:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:01.714 ************************************
00:31:01.714 START TEST nvmf_bdevperf
00:31:01.714 ************************************
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:31:01.715 * Looking for test storage...
00:31:01.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:31:01.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:01.715 --rc genhtml_branch_coverage=1
00:31:01.715 --rc genhtml_function_coverage=1
00:31:01.715 --rc genhtml_legend=1
00:31:01.715 --rc geninfo_all_blocks=1
00:31:01.715 --rc geninfo_unexecuted_blocks=1
00:31:01.715
00:31:01.715 '
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:31:01.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:01.715 --rc genhtml_branch_coverage=1
00:31:01.715 --rc genhtml_function_coverage=1
00:31:01.715 --rc genhtml_legend=1
00:31:01.715 --rc geninfo_all_blocks=1
00:31:01.715 --rc geninfo_unexecuted_blocks=1
00:31:01.715
00:31:01.715 '
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:31:01.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:01.715 --rc genhtml_branch_coverage=1
00:31:01.715 --rc genhtml_function_coverage=1
00:31:01.715 --rc genhtml_legend=1
00:31:01.715 --rc geninfo_all_blocks=1
00:31:01.715 --rc geninfo_unexecuted_blocks=1
00:31:01.715
00:31:01.715 '
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:31:01.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:01.715 --rc genhtml_branch_coverage=1
00:31:01.715 --rc genhtml_function_coverage=1
00:31:01.715 --rc genhtml_legend=1
00:31:01.715 --rc geninfo_all_blocks=1
00:31:01.715 --rc geninfo_unexecuted_blocks=1
00:31:01.715
00:31:01.715 '
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:01.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.715 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.716 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.058 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:02.058 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:02.058 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:02.058 08:45:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:08.639 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:08.639 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:08.639 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.640 
08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:08.640 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:08.640 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.640 08:46:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.640 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:31:08.912 00:31:08.912 --- 10.0.0.2 ping statistics --- 00:31:08.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.912 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:08.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:31:08.912 00:31:08.912 --- 10.0.0.1 ping statistics --- 00:31:08.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.912 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:08.912 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3935799 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3935799 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3935799 ']' 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:09.174 08:46:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.174 [2024-10-01 08:46:00.829606] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
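
Everything nvmftestinit traced above boils down to a two-namespace topology on the one physical pair of E810 ports found earlier: the target port lives in cvl_0_0_ns_spdk as 10.0.0.2 and the initiator port stays in the default namespace as 10.0.0.1, which is exactly what the two ping checks verify. A condensed sketch using the names from this log:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port of the pair into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP (port 4420) in, tagged so the teardown can find the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                 # initiator -> target (0.614 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator (0.177 ms above)
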
00:31:09.174 [2024-10-01 08:46:00.829714] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.174 [2024-10-01 08:46:00.923279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:09.435 [2024-10-01 08:46:01.016294] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.435 [2024-10-01 08:46:01.016351] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.435 [2024-10-01 08:46:01.016360] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.435 [2024-10-01 08:46:01.016368] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.435 [2024-10-01 08:46:01.016374] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.435 [2024-10-01 08:46:01.017914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:09.435 [2024-10-01 08:46:01.018085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:09.435 [2024-10-01 08:46:01.018108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.004 [2024-10-01 08:46:01.681665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.004 Malloc0 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
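
At this point nvmf_tgt is running inside the namespace and the rpc_cmd calls traced here, together with the two that follow just below, build the whole NVMe-oF subsystem. Condensed into plain rpc.py invocations with the same arguments the trace shows:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options exactly as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # the next two calls appear in the trace that follows:
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
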
00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.004 [2024-10-01 08:46:01.747442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:10.004 { 00:31:10.004 "params": { 00:31:10.004 "name": "Nvme$subsystem", 00:31:10.004 "trtype": "$TEST_TRANSPORT", 00:31:10.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.004 "adrfam": "ipv4", 00:31:10.004 "trsvcid": "$NVMF_PORT", 00:31:10.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.004 "hdgst": ${hdgst:-false}, 00:31:10.004 "ddgst": ${ddgst:-false} 00:31:10.004 }, 00:31:10.004 "method": "bdev_nvme_attach_controller" 00:31:10.004 } 00:31:10.004 EOF 00:31:10.004 )") 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:31:10.004 08:46:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:31:10.004 "params": { 00:31:10.004 "name": "Nvme1", 00:31:10.004 "trtype": "tcp", 00:31:10.004 "traddr": "10.0.0.2", 00:31:10.004 "adrfam": "ipv4", 00:31:10.004 "trsvcid": "4420", 00:31:10.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:10.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:10.004 "hdgst": false, 00:31:10.004 "ddgst": false 00:31:10.004 }, 00:31:10.004 "method": "bdev_nvme_attach_controller" 00:31:10.004 }' 00:31:10.004 [2024-10-01 08:46:01.803012] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
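
The --json /dev/fd/62 argument works because gen_nvmf_target_json (a helper in the harness's nvmf/common.sh, traced above) writes one bdev_nvme_attach_controller entry per subsystem to a pipe. Note the ${hdgst:-false} and ${ddgst:-false} expansions in the template: the digest flags default to false but can be flipped from the environment, which is presumably how the digest runs earlier in this log enabled them. A sketch, with a hypothetical output file in place of the pipe:

    # generate a one-controller bdev_nvme config, here with data digest enabled
    hdgst=false ddgst=true gen_nvmf_target_json > /tmp/bperf.json
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1   # depth 128, 4 KiB verify I/O, 1 s
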
00:31:10.004 [2024-10-01 08:46:01.803067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935854 ] 00:31:10.263 [2024-10-01 08:46:01.863818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.263 [2024-10-01 08:46:01.928493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.523 Running I/O for 1 seconds... 00:31:11.463 8886.00 IOPS, 34.71 MiB/s 00:31:11.463 Latency(us) 00:31:11.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.463 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:11.463 Verification LBA range: start 0x0 length 0x4000 00:31:11.463 Nvme1n1 : 1.01 8964.22 35.02 0.00 0.00 14211.87 887.47 12342.61 00:31:11.463 =================================================================================================================== 00:31:11.463 Total : 8964.22 35.02 0.00 0.00 14211.87 887.47 12342.61 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3936168 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=() 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:31:11.723 { 00:31:11.723 "params": { 00:31:11.723 "name": "Nvme$subsystem", 00:31:11.723 "trtype": "$TEST_TRANSPORT", 00:31:11.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.723 "adrfam": "ipv4", 00:31:11.723 "trsvcid": "$NVMF_PORT", 00:31:11.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.723 "hdgst": ${hdgst:-false}, 00:31:11.723 "ddgst": ${ddgst:-false} 00:31:11.723 }, 00:31:11.723 "method": "bdev_nvme_attach_controller" 00:31:11.723 } 00:31:11.723 EOF 00:31:11.723 )") 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq . 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=, 00:31:11.723 08:46:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:31:11.723 "params": { 00:31:11.723 "name": "Nvme1", 00:31:11.723 "trtype": "tcp", 00:31:11.723 "traddr": "10.0.0.2", 00:31:11.723 "adrfam": "ipv4", 00:31:11.723 "trsvcid": "4420", 00:31:11.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:11.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:11.723 "hdgst": false, 00:31:11.723 "ddgst": false 00:31:11.723 }, 00:31:11.723 "method": "bdev_nvme_attach_controller" 00:31:11.723 }' 00:31:11.723 [2024-10-01 08:46:03.437443] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
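
The second bdevperf run being launched here differs from the first in two flags: -t 15 gives the upcoming kill-and-recover sequence room to play out, and -f keeps bdevperf running when I/O starts failing (this is how the harness uses it). The step being driven, with the pids from this log:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f &   # bdevperfpid=3936168
    sleep 3             # let I/O reach steady state
    kill -9 3935799     # the nvmf_tgt pid; in-flight commands will now abort
    sleep 3
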
00:31:11.723 [2024-10-01 08:46:03.437498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936168 ]
00:31:11.753 [2024-10-01 08:46:03.498293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:11.983 [2024-10-01 08:46:03.561640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:31:11.983 Running I/O for 15 seconds...
00:31:14.592 8780.00 IOPS, 34.30 MiB/s
9565.00 IOPS, 37.36 MiB/s
08:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3935799
08:46:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:31:14.592 [2024-10-01 08:46:06.401917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:14.592 [2024-10-01 08:46:06.401957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.592 [2024-10-01 08:46:06.401977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:14.592 [2024-10-01 08:46:06.401988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:14.592 [... the same WRITE / ABORTED - SQ DELETION (00/08) pairing repeats for WRITEs at lba 84184 through 84536 (in steps of 8) and one READ at lba 83640, all on qid:1; the capture breaks off mid-entry at the WRITE for lba 84544 ...]
OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.402870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.402880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.402887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.402897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.402904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.402914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.402921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.402930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.402939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.402948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.402955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.402964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.402972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.402981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.402990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.403106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.403123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 
08:46:06.403140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.403157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.403174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.403191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.593 [2024-10-01 08:46:06.403208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.593 [2024-10-01 08:46:06.403225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.593 [2024-10-01 08:46:06.403242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.593 [2024-10-01 08:46:06.403260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.593 [2024-10-01 08:46:06.403277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.593 [2024-10-01 08:46:06.403295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.593 [2024-10-01 08:46:06.403307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:14.594 [2024-10-01 08:46:06.403853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.403981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.594 [2024-10-01 08:46:06.403991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.594 [2024-10-01 08:46:06.404002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404029] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.595 [2024-10-01 08:46:06.404330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2482c90 is same with the state(6) to be set 00:31:14.595 [2024-10-01 08:46:06.404348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:14.595 [2024-10-01 08:46:06.404354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:14.595 [2024-10-01 08:46:06.404360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84160 len:8 PRP1 0x0 PRP2 0x0 00:31:14.595 [2024-10-01 08:46:06.404368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.595 [2024-10-01 08:46:06.404406] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2482c90 was disconnected and freed. reset controller.
00:31:14.595 [2024-10-01 08:46:06.407983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:14.595 [2024-10-01 08:46:06.408042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:14.595 [2024-10-01 08:46:06.408837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.595 [2024-10-01 08:46:06.408854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:14.595 [2024-10-01 08:46:06.408866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:14.595 [2024-10-01 08:46:06.409089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:14.595 [2024-10-01 08:46:06.409307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:14.595 [2024-10-01 08:46:06.409316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:14.595 [2024-10-01 08:46:06.409324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:14.856 [2024-10-01 08:46:06.412814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:14.856 [2024-10-01 08:46:06.422082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:14.856 [2024-10-01 08:46:06.422675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.857 [2024-10-01 08:46:06.422715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:14.857 [2024-10-01 08:46:06.422726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:14.857 [2024-10-01 08:46:06.422963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:14.857 [2024-10-01 08:46:06.423195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:14.857 [2024-10-01 08:46:06.423207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:14.857 [2024-10-01 08:46:06.423215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:14.857 [2024-10-01 08:46:06.426707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
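What the burst of ABORTED - SQ DELETION notices above records is the driver draining qpair 1 after its submission queue was deleted: each queued READ/WRITE is printed once and then completed with status ABORTED - SQ DELETION (00/08) before the qpair is freed. To size such a burst from a saved copy of this console output, a small offline tally is enough. This is a minimal sketch, not part of the SPDK test suite; it assumes one log entry per line and a hypothetical file name console.log.

    import re
    from collections import Counter

    # Command prints look like:
    #   *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84232 len:8 SGL DATA BLOCK ...
    CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")

    def tally_sq_deletion_aborts(path):
        """Count READ/WRITE commands whose printed completion is ABORTED - SQ DELETION."""
        counts = Counter()
        lbas = {"READ": [], "WRITE": []}
        pending = None  # most recent printed command, awaiting its completion line
        with open(path, errors="replace") as log:
            for line in log:
                m = CMD_RE.search(line)
                if m:
                    pending = (m.group(1), int(m.group(2)))
                if "ABORTED - SQ DELETION" in line and pending:
                    opcode, lba = pending
                    counts[opcode] += 1
                    lbas[opcode].append(lba)
                    pending = None
        for opcode in counts:
            print(f"{opcode}: {counts[opcode]} aborted, lba {min(lbas[opcode])}..{max(lbas[opcode])}")

    tally_sq_deletion_aborts("console.log")  # hypothetical path to a saved copy of this output

Run against this excerpt it should report roughly 54 aborted WRITEs (lba 84232..84656) and 66 aborted READs (lba 83640..84160), matching the ranges noted in the elision above.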
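The reconnect attempts that follow all die in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: the TCP SYN to 10.0.0.2 is answered with a reset because nothing is listening on NVMe/TCP port 4420 while the target side is down. The same condition can be confirmed outside SPDK with a plain socket probe; a minimal sketch, assuming only the address and port shown in the log:

    import errno
    import socket

    def probe(addr="10.0.0.2", port=4420, timeout=1.0):
        """Plain TCP connect to the target; report the errno the kernel returns."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect((addr, port))
                print(f"{addr}:{port} accepted the connection")
            except OSError as e:
                # errno 111 (ECONNREFUSED) is what posix_sock_create reports above
                print(f"{addr}:{port} failed: errno={e.errno} ({errno.errorcode.get(e.errno, '?')})")

    probe()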
00:31:14.857 [2024-10-01 08:46:06.435968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:14.857 [2024-10-01 08:46:06.436624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.857 [2024-10-01 08:46:06.436664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:14.857 [2024-10-01 08:46:06.436675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:14.857 [2024-10-01 08:46:06.436910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:14.857 [2024-10-01 08:46:06.437139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:14.857 [2024-10-01 08:46:06.437150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:14.857 [2024-10-01 08:46:06.437158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:14.857 [2024-10-01 08:46:06.440650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same nine-entry reset cycle repeats at 13 to 14 ms intervals, with further attempts at 08:46:06.449701, .463468, .477381, .491147, .505048, .518976, .532721, .546635, .560406, .574307, .588216, .601950, .615849, .629699, .643617 and .657371, each failing with errno = 111; 16 repetitions elided ...]
00:31:14.858 [2024-10-01 08:46:06.671154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:14.858 [2024-10-01 08:46:06.671780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.858 [2024-10-01 08:46:06.671820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:14.858 [2024-10-01 08:46:06.671833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:14.858 [2024-10-01 08:46:06.672077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:14.858 [2024-10-01 08:46:06.672299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:14.858 [2024-10-01 08:46:06.672310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:14.858 [2024-10-01 08:46:06.672318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:14.858 [2024-10-01 08:46:06.675815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:15.120 [2024-10-01 08:46:06.685075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.120 [2024-10-01 08:46:06.685642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.120 [2024-10-01 08:46:06.685662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.120 [2024-10-01 08:46:06.685671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.120 [2024-10-01 08:46:06.685892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.120 [2024-10-01 08:46:06.686117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.120 [2024-10-01 08:46:06.686129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.120 [2024-10-01 08:46:06.686136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.120 [2024-10-01 08:46:06.689623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.120 [2024-10-01 08:46:06.698870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.120 [2024-10-01 08:46:06.699488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.120 [2024-10-01 08:46:06.699528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.120 [2024-10-01 08:46:06.699539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.120 [2024-10-01 08:46:06.699774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.120 [2024-10-01 08:46:06.700005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.120 [2024-10-01 08:46:06.700015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.700023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.703515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.121 [2024-10-01 08:46:06.712769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.713399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.713438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.713450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.713685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.713912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.713922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.713930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 8946.33 IOPS, 34.95 MiB/s [2024-10-01 08:46:06.719112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.121 [2024-10-01 08:46:06.726522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.727116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.727156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.727168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.727408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.727628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.727638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.727651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.731167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.121 [2024-10-01 08:46:06.740425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.741090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.741130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.741143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.741382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.741604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.741614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.741622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.745133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.121 [2024-10-01 08:46:06.754189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.754839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.754879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.754890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.755136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.755357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.755368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.755375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.758865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.121 [2024-10-01 08:46:06.768119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.768698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.768718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.768726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.768942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.769166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.769176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.769184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.772671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.121 [2024-10-01 08:46:06.781922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.782487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.782511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.782519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.782735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.782951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.782960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.782968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.786460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.121 [2024-10-01 08:46:06.795706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.796322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.796361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.796374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.796611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.796832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.796842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.796849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.800350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.121 [2024-10-01 08:46:06.809607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.810305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.810345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.810357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.810592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.810812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.810823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.810830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.814331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.121 [2024-10-01 08:46:06.823381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.824070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.824111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.824124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.824363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.824588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.121 [2024-10-01 08:46:06.824598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.121 [2024-10-01 08:46:06.824607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.121 [2024-10-01 08:46:06.828108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.121 [2024-10-01 08:46:06.837176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.121 [2024-10-01 08:46:06.837835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.121 [2024-10-01 08:46:06.837874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.121 [2024-10-01 08:46:06.837886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.121 [2024-10-01 08:46:06.838129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.121 [2024-10-01 08:46:06.838351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.122 [2024-10-01 08:46:06.838361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.122 [2024-10-01 08:46:06.838369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.122 [2024-10-01 08:46:06.841862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.122 [2024-10-01 08:46:06.850924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.122 [2024-10-01 08:46:06.851456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.122 [2024-10-01 08:46:06.851476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.122 [2024-10-01 08:46:06.851484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.122 [2024-10-01 08:46:06.851701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.122 [2024-10-01 08:46:06.851917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.122 [2024-10-01 08:46:06.851927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.122 [2024-10-01 08:46:06.851935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.122 [2024-10-01 08:46:06.855428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.122 [2024-10-01 08:46:06.864678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.122 [2024-10-01 08:46:06.865317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.122 [2024-10-01 08:46:06.865357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.122 [2024-10-01 08:46:06.865369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.122 [2024-10-01 08:46:06.865604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.122 [2024-10-01 08:46:06.865825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.122 [2024-10-01 08:46:06.865835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.122 [2024-10-01 08:46:06.865842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.122 [2024-10-01 08:46:06.869349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.122 [2024-10-01 08:46:06.878601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.122 [2024-10-01 08:46:06.879295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.122 [2024-10-01 08:46:06.879334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.122 [2024-10-01 08:46:06.879345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.122 [2024-10-01 08:46:06.879581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.122 [2024-10-01 08:46:06.879802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.122 [2024-10-01 08:46:06.879812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.122 [2024-10-01 08:46:06.879820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.122 [2024-10-01 08:46:06.883321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.122 [2024-10-01 08:46:06.892372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.122 [2024-10-01 08:46:06.893040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.122 [2024-10-01 08:46:06.893080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.122 [2024-10-01 08:46:06.893092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.122 [2024-10-01 08:46:06.893329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.122 [2024-10-01 08:46:06.893550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.122 [2024-10-01 08:46:06.893560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.122 [2024-10-01 08:46:06.893568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.122 [2024-10-01 08:46:06.897070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.122 [2024-10-01 08:46:06.906128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.122 [2024-10-01 08:46:06.906546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.122 [2024-10-01 08:46:06.906568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.122 [2024-10-01 08:46:06.906577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.122 [2024-10-01 08:46:06.906795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.122 [2024-10-01 08:46:06.907023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.122 [2024-10-01 08:46:06.907035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.122 [2024-10-01 08:46:06.907042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.122 [2024-10-01 08:46:06.910532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.122 [2024-10-01 08:46:06.919989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.122 [2024-10-01 08:46:06.920656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.122 [2024-10-01 08:46:06.920698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.122 [2024-10-01 08:46:06.920715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.122 [2024-10-01 08:46:06.920950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.122 [2024-10-01 08:46:06.921179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.122 [2024-10-01 08:46:06.921195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.122 [2024-10-01 08:46:06.921203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.122 [2024-10-01 08:46:06.924703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.122 [2024-10-01 08:46:06.933765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.122 [2024-10-01 08:46:06.934401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.122 [2024-10-01 08:46:06.934440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.122 [2024-10-01 08:46:06.934452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.122 [2024-10-01 08:46:06.934688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.122 [2024-10-01 08:46:06.934908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.122 [2024-10-01 08:46:06.934919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.122 [2024-10-01 08:46:06.934926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.122 [2024-10-01 08:46:06.938426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.384 [2024-10-01 08:46:06.947692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.384 [2024-10-01 08:46:06.948272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.384 [2024-10-01 08:46:06.948293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.384 [2024-10-01 08:46:06.948301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.384 [2024-10-01 08:46:06.948517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.384 [2024-10-01 08:46:06.948734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.384 [2024-10-01 08:46:06.948744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.384 [2024-10-01 08:46:06.948751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.384 [2024-10-01 08:46:06.952240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.384 [2024-10-01 08:46:06.961491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.384 [2024-10-01 08:46:06.962066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.384 [2024-10-01 08:46:06.962083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.384 [2024-10-01 08:46:06.962091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.384 [2024-10-01 08:46:06.962307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.384 [2024-10-01 08:46:06.962522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.384 [2024-10-01 08:46:06.962538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.384 [2024-10-01 08:46:06.962547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.384 [2024-10-01 08:46:06.966037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.384 [2024-10-01 08:46:06.975414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.384 [2024-10-01 08:46:06.975970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.384 [2024-10-01 08:46:06.975986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.384 [2024-10-01 08:46:06.976000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.384 [2024-10-01 08:46:06.976217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.384 [2024-10-01 08:46:06.976433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.384 [2024-10-01 08:46:06.976442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.384 [2024-10-01 08:46:06.976449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.384 [2024-10-01 08:46:06.979932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.384 [2024-10-01 08:46:06.989185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.384 [2024-10-01 08:46:06.989838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.384 [2024-10-01 08:46:06.989878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.384 [2024-10-01 08:46:06.989889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.384 [2024-10-01 08:46:06.990132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.384 [2024-10-01 08:46:06.990354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.384 [2024-10-01 08:46:06.990364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.384 [2024-10-01 08:46:06.990372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.384 [2024-10-01 08:46:06.993864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.384 [2024-10-01 08:46:07.002915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.384 [2024-10-01 08:46:07.003473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.384 [2024-10-01 08:46:07.003493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.384 [2024-10-01 08:46:07.003501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.384 [2024-10-01 08:46:07.003718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.384 [2024-10-01 08:46:07.003935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.384 [2024-10-01 08:46:07.003945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.384 [2024-10-01 08:46:07.003952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.384 [2024-10-01 08:46:07.007445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.384 [2024-10-01 08:46:07.016700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.384 [2024-10-01 08:46:07.017374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.384 [2024-10-01 08:46:07.017414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.384 [2024-10-01 08:46:07.017425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.384 [2024-10-01 08:46:07.017661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.384 [2024-10-01 08:46:07.017902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.384 [2024-10-01 08:46:07.017912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.384 [2024-10-01 08:46:07.017920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.384 [2024-10-01 08:46:07.021417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.384 [2024-10-01 08:46:07.030483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.384 [2024-10-01 08:46:07.031040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.384 [2024-10-01 08:46:07.031067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.384 [2024-10-01 08:46:07.031076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.384 [2024-10-01 08:46:07.031298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.384 [2024-10-01 08:46:07.031525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.384 [2024-10-01 08:46:07.031536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.384 [2024-10-01 08:46:07.031544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.384 [2024-10-01 08:46:07.035045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.384 [2024-10-01 08:46:07.044340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.384 [2024-10-01 08:46:07.045016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.045055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.045066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.045301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.045521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.045531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.045539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.049038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.385 [2024-10-01 08:46:07.058088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.058643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.058663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.058671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.058895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.059117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.059128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.059135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.062620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.385 [2024-10-01 08:46:07.071873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.072422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.072439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.072447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.072663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.072880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.072890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.072898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.076389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.385 [2024-10-01 08:46:07.085640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.086300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.086339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.086351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.086587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.086808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.086818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.086826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.090328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.385 [2024-10-01 08:46:07.099383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.100044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.100084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.100097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.100334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.100555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.100564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.100577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.104079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.385 [2024-10-01 08:46:07.113131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.113657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.113696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.113709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.113948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.114177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.114188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.114196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.117689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.385 [2024-10-01 08:46:07.126945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.127482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.127502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.127510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.127727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.127943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.127952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.127959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.131451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.385 [2024-10-01 08:46:07.140714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.141251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.141291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.141302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.141538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.141758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.141768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.141775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.145283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.385 [2024-10-01 08:46:07.154540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.155251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.155290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.155302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.155537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.155758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.155768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.155776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.159276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.385 [2024-10-01 08:46:07.168375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:15.385 [2024-10-01 08:46:07.168970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.385 [2024-10-01 08:46:07.169019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:15.385 [2024-10-01 08:46:07.169031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:15.385 [2024-10-01 08:46:07.169267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:15.385 [2024-10-01 08:46:07.169488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.385 [2024-10-01 08:46:07.169498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.385 [2024-10-01 08:46:07.169506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.385 [2024-10-01 08:46:07.173003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:15.385 [2024-10-01 08:46:07.182262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:15.385 [2024-10-01 08:46:07.182805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:15.385 [2024-10-01 08:46:07.182825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:15.385 [2024-10-01 08:46:07.182833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:15.386 [2024-10-01 08:46:07.183054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:15.386 [2024-10-01 08:46:07.183272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:15.386 [2024-10-01 08:46:07.183281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:15.386 [2024-10-01 08:46:07.183288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:15.386 [2024-10-01 08:46:07.186775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same nine-record reconnect cycle repeats 37 more times at ~13.7 ms intervals (08:46:07.196025 through 08:46:07.697731), identical except for timestamps: same tqpair=0x2470280, same target 10.0.0.2:4420, same errno = 111 ...]
[... one further cycle iteration, 08:46:07.706999 through 08:46:07.711652 ...]
00:31:15.912 6709.75 IOPS, 26.21 MiB/s
00:31:16.175 [2024-10-01 08:46:07.736305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.175 [2024-10-01 08:46:07.736977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.175 [2024-10-01 08:46:07.737024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.175 [2024-10-01 08:46:07.737037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.175 [2024-10-01 08:46:07.737274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.175 [2024-10-01 08:46:07.737494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.175 [2024-10-01 08:46:07.737504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.175 [2024-10-01 08:46:07.737512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.175 [2024-10-01 08:46:07.741010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.175 [2024-10-01 08:46:07.750076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.175 [2024-10-01 08:46:07.750748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.175 [2024-10-01 08:46:07.750788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.175 [2024-10-01 08:46:07.750799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.175 [2024-10-01 08:46:07.751045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.175 [2024-10-01 08:46:07.751267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.175 [2024-10-01 08:46:07.751276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.175 [2024-10-01 08:46:07.751293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.175 [2024-10-01 08:46:07.754787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.175 [2024-10-01 08:46:07.763836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.175 [2024-10-01 08:46:07.764470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.175 [2024-10-01 08:46:07.764510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.175 [2024-10-01 08:46:07.764521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.175 [2024-10-01 08:46:07.764757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.175 [2024-10-01 08:46:07.764977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.175 [2024-10-01 08:46:07.764987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.175 [2024-10-01 08:46:07.765006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.175 [2024-10-01 08:46:07.768497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.175 [2024-10-01 08:46:07.777746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.175 [2024-10-01 08:46:07.778421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.175 [2024-10-01 08:46:07.778460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.175 [2024-10-01 08:46:07.778471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.175 [2024-10-01 08:46:07.778707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.175 [2024-10-01 08:46:07.778927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.175 [2024-10-01 08:46:07.778937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.175 [2024-10-01 08:46:07.778945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.175 [2024-10-01 08:46:07.782447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.175 [2024-10-01 08:46:07.791493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.175 [2024-10-01 08:46:07.792203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.175 [2024-10-01 08:46:07.792242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.175 [2024-10-01 08:46:07.792253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.175 [2024-10-01 08:46:07.792489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.175 [2024-10-01 08:46:07.792709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.175 [2024-10-01 08:46:07.792718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.175 [2024-10-01 08:46:07.792726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.175 [2024-10-01 08:46:07.796229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.175 [2024-10-01 08:46:07.805275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.175 [2024-10-01 08:46:07.805944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.175 [2024-10-01 08:46:07.805988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.175 [2024-10-01 08:46:07.806010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.175 [2024-10-01 08:46:07.806246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.175 [2024-10-01 08:46:07.806466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.175 [2024-10-01 08:46:07.806476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.175 [2024-10-01 08:46:07.806484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.175 [2024-10-01 08:46:07.809975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.175 [2024-10-01 08:46:07.819025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.175 [2024-10-01 08:46:07.819649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.175 [2024-10-01 08:46:07.819689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.175 [2024-10-01 08:46:07.819700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.175 [2024-10-01 08:46:07.819935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.175 [2024-10-01 08:46:07.820166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.175 [2024-10-01 08:46:07.820178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.175 [2024-10-01 08:46:07.820185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.175 [2024-10-01 08:46:07.823677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.175 [2024-10-01 08:46:07.832943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.175 [2024-10-01 08:46:07.833617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.175 [2024-10-01 08:46:07.833657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.175 [2024-10-01 08:46:07.833669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.175 [2024-10-01 08:46:07.833904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.175 [2024-10-01 08:46:07.834145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.175 [2024-10-01 08:46:07.834157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.175 [2024-10-01 08:46:07.834165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.175 [2024-10-01 08:46:07.837664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.175 [2024-10-01 08:46:07.846745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.175 [2024-10-01 08:46:07.847378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.175 [2024-10-01 08:46:07.847417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.175 [2024-10-01 08:46:07.847428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.175 [2024-10-01 08:46:07.847664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.175 [2024-10-01 08:46:07.847890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.175 [2024-10-01 08:46:07.847900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.175 [2024-10-01 08:46:07.847908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.175 [2024-10-01 08:46:07.851414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.175 [2024-10-01 08:46:07.860482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.175 [2024-10-01 08:46:07.861058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.175 [2024-10-01 08:46:07.861079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.175 [2024-10-01 08:46:07.861087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.861304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.861521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.861532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.861539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.865034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.874297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.874893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.874935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.874946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.875190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.875411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.875421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.875429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.878922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.888183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.888794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.888833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.888845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.889092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.889313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.889326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.889334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.892838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.902112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.902763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.902803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.902814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.903059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.903281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.903291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.903298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.906789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.915848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.916519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.916559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.916570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.916805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.917034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.917045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.917053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.920547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.929608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.930322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.930363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.930375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.930611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.930831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.930843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.930851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.934365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.943436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.944012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.944032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.944045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.944263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.944479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.944489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.944496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.947985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.957245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.957693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.957710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.957718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.957934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.958158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.958169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.958176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.961664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.971113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.971720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.971760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.971771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.972017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.972238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.972248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.972255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.975747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.176 [2024-10-01 08:46:07.985001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.176 [2024-10-01 08:46:07.985529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.176 [2024-10-01 08:46:07.985548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.176 [2024-10-01 08:46:07.985556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.176 [2024-10-01 08:46:07.985772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.176 [2024-10-01 08:46:07.985988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.176 [2024-10-01 08:46:07.986009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.176 [2024-10-01 08:46:07.986017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.176 [2024-10-01 08:46:07.989504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.438 [2024-10-01 08:46:07.998894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.438 [2024-10-01 08:46:07.999466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.438 [2024-10-01 08:46:07.999483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.438 [2024-10-01 08:46:07.999491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.438 [2024-10-01 08:46:07.999708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.438 [2024-10-01 08:46:07.999924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.438 [2024-10-01 08:46:07.999934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.438 [2024-10-01 08:46:07.999941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.438 [2024-10-01 08:46:08.003437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.438 [2024-10-01 08:46:08.012682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.438 [2024-10-01 08:46:08.013295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.438 [2024-10-01 08:46:08.013334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.438 [2024-10-01 08:46:08.013345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.438 [2024-10-01 08:46:08.013581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.438 [2024-10-01 08:46:08.013802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.438 [2024-10-01 08:46:08.013811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.438 [2024-10-01 08:46:08.013819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.438 [2024-10-01 08:46:08.017321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.438 [2024-10-01 08:46:08.026572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.438 [2024-10-01 08:46:08.027206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.438 [2024-10-01 08:46:08.027246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.438 [2024-10-01 08:46:08.027257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.438 [2024-10-01 08:46:08.027493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.438 [2024-10-01 08:46:08.027713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.438 [2024-10-01 08:46:08.027723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.438 [2024-10-01 08:46:08.027731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.438 [2024-10-01 08:46:08.031232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.438 [2024-10-01 08:46:08.040299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.438 [2024-10-01 08:46:08.040982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.438 [2024-10-01 08:46:08.041030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.438 [2024-10-01 08:46:08.041041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.438 [2024-10-01 08:46:08.041277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.438 [2024-10-01 08:46:08.041497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.438 [2024-10-01 08:46:08.041507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.438 [2024-10-01 08:46:08.041514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.438 [2024-10-01 08:46:08.045021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.438 [2024-10-01 08:46:08.054067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.438 [2024-10-01 08:46:08.054636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.438 [2024-10-01 08:46:08.054655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.438 [2024-10-01 08:46:08.054663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.438 [2024-10-01 08:46:08.054879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.438 [2024-10-01 08:46:08.055104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.438 [2024-10-01 08:46:08.055114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.438 [2024-10-01 08:46:08.055122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.438 [2024-10-01 08:46:08.058606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.438 [2024-10-01 08:46:08.067850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.438 [2024-10-01 08:46:08.068415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.438 [2024-10-01 08:46:08.068433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.438 [2024-10-01 08:46:08.068441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.438 [2024-10-01 08:46:08.068656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.438 [2024-10-01 08:46:08.068872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.438 [2024-10-01 08:46:08.068882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.438 [2024-10-01 08:46:08.068889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.438 [2024-10-01 08:46:08.072382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.081625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.082046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.082072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.082080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.082306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.082524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.082533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.082540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.086033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.095484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.096144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.096184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.096196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.096431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.096652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.096662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.096669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.100171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.109219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.109862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.109901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.109912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.110158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.110379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.110390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.110397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.113888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.123140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.123805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.123844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.123855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.124101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.124322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.124332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.124346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.127838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.136896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.137565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.137605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.137616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.137852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.138082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.138093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.138101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.141593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.150647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.151204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.151244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.151256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.151494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.151714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.151724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.151732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.155233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.164484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.165165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.165205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.165216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.165452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.165673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.165683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.165690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.169196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.178241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.178898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.178937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.178949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.179197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.179419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.179429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.179437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.182929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.191998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.192662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.192702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.192713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.192949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.193179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.193190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.193198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.196696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.205750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.206326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.206346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.439 [2024-10-01 08:46:08.206354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.439 [2024-10-01 08:46:08.206571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.439 [2024-10-01 08:46:08.206788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.439 [2024-10-01 08:46:08.206798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.439 [2024-10-01 08:46:08.206805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.439 [2024-10-01 08:46:08.210299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.439 [2024-10-01 08:46:08.219547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.439 [2024-10-01 08:46:08.220103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.439 [2024-10-01 08:46:08.220121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.440 [2024-10-01 08:46:08.220128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.440 [2024-10-01 08:46:08.220344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.440 [2024-10-01 08:46:08.220566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.440 [2024-10-01 08:46:08.220575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.440 [2024-10-01 08:46:08.220583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.440 [2024-10-01 08:46:08.224071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.440 [2024-10-01 08:46:08.233314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.440 [2024-10-01 08:46:08.233802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.440 [2024-10-01 08:46:08.233842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.440 [2024-10-01 08:46:08.233852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.440 [2024-10-01 08:46:08.234099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.440 [2024-10-01 08:46:08.234321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.440 [2024-10-01 08:46:08.234331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.440 [2024-10-01 08:46:08.234339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.440 [2024-10-01 08:46:08.237842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.440 [2024-10-01 08:46:08.247106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.440 [2024-10-01 08:46:08.247740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.440 [2024-10-01 08:46:08.247780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.440 [2024-10-01 08:46:08.247791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.440 [2024-10-01 08:46:08.248037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.440 [2024-10-01 08:46:08.248259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.440 [2024-10-01 08:46:08.248268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.440 [2024-10-01 08:46:08.248276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.440 [2024-10-01 08:46:08.251769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.261035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.261702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.261741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.261752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.261988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.262220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.262231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.262239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.702 [2024-10-01 08:46:08.265740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.274787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.275455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.275494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.275505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.275741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.275961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.275971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.275979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.702 [2024-10-01 08:46:08.279480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.288531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.289248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.289287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.289298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.289534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.289755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.289765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.289773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.702 [2024-10-01 08:46:08.293277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.302325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.302950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.302989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.303010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.303246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.303466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.303477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.303484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.702 [2024-10-01 08:46:08.306977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.316230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.316902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.316946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.316959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.317206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.317428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.317437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.317445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.702 [2024-10-01 08:46:08.320936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.329985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.330638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.330677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.330688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.330924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.331155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.331167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.331174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.702 [2024-10-01 08:46:08.334666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.343729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.344296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.344336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.344348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.344586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.344806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.344816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.344824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.702 [2024-10-01 08:46:08.348329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.357597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.358171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.358192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.358200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.358417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.358638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.358647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.358654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.702 [2024-10-01 08:46:08.362152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.702 [2024-10-01 08:46:08.371413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.702 [2024-10-01 08:46:08.371970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.702 [2024-10-01 08:46:08.371987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.702 [2024-10-01 08:46:08.372000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.702 [2024-10-01 08:46:08.372217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.702 [2024-10-01 08:46:08.372434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.702 [2024-10-01 08:46:08.372443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.702 [2024-10-01 08:46:08.372450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.703 [2024-10-01 08:46:08.375939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.703 [2024-10-01 08:46:08.385208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.703 [2024-10-01 08:46:08.385773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.703 [2024-10-01 08:46:08.385789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.703 [2024-10-01 08:46:08.385797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.703 [2024-10-01 08:46:08.386019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.703 [2024-10-01 08:46:08.386235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.703 [2024-10-01 08:46:08.386245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.703 [2024-10-01 08:46:08.386252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.703 [2024-10-01 08:46:08.389738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.703 [2024-10-01 08:46:08.399009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.703 [2024-10-01 08:46:08.399661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.703 [2024-10-01 08:46:08.399701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.703 [2024-10-01 08:46:08.399712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.703 [2024-10-01 08:46:08.399948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.703 [2024-10-01 08:46:08.400177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.703 [2024-10-01 08:46:08.400188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.703 [2024-10-01 08:46:08.400196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.703 [2024-10-01 08:46:08.403693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.703 [2024-10-01 08:46:08.412765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.703 [2024-10-01 08:46:08.413234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.703 [2024-10-01 08:46:08.413255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.703 [2024-10-01 08:46:08.413263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.703 [2024-10-01 08:46:08.413479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.703 [2024-10-01 08:46:08.413697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.703 [2024-10-01 08:46:08.413707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.703 [2024-10-01 08:46:08.413714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.703 [2024-10-01 08:46:08.417213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.703 [2024-10-01 08:46:08.426678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.703 [2024-10-01 08:46:08.427290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.703 [2024-10-01 08:46:08.427330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.703 [2024-10-01 08:46:08.427341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.703 [2024-10-01 08:46:08.427577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.703 [2024-10-01 08:46:08.427797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.703 [2024-10-01 08:46:08.427807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.703 [2024-10-01 08:46:08.427815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.703 [2024-10-01 08:46:08.431321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.703 [2024-10-01 08:46:08.440606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:16.703 [2024-10-01 08:46:08.441264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:16.703 [2024-10-01 08:46:08.441304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:16.703 [2024-10-01 08:46:08.441316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:16.703 [2024-10-01 08:46:08.441552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:16.703 [2024-10-01 08:46:08.441773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:16.703 [2024-10-01 08:46:08.441783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:16.703 [2024-10-01 08:46:08.441791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:16.703 [2024-10-01 08:46:08.445303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:16.703 [2024-10-01 08:46:08.454355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.703 [2024-10-01 08:46:08.455056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.703 [2024-10-01 08:46:08.455095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.703 [2024-10-01 08:46:08.455118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.703 [2024-10-01 08:46:08.455354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.703 [2024-10-01 08:46:08.455574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.703 [2024-10-01 08:46:08.455584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.703 [2024-10-01 08:46:08.455592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.703 [2024-10-01 08:46:08.459090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.703 [2024-10-01 08:46:08.468228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.703 [2024-10-01 08:46:08.468857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.703 [2024-10-01 08:46:08.468897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.703 [2024-10-01 08:46:08.468908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.703 [2024-10-01 08:46:08.469154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.703 [2024-10-01 08:46:08.469375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.703 [2024-10-01 08:46:08.469385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.703 [2024-10-01 08:46:08.469392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.703 [2024-10-01 08:46:08.472885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.703 [2024-10-01 08:46:08.482142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.703 [2024-10-01 08:46:08.482792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.703 [2024-10-01 08:46:08.482832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.703 [2024-10-01 08:46:08.482842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.703 [2024-10-01 08:46:08.483087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.703 [2024-10-01 08:46:08.483309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.703 [2024-10-01 08:46:08.483319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.703 [2024-10-01 08:46:08.483326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.703 [2024-10-01 08:46:08.486821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.703 [2024-10-01 08:46:08.495869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.703 [2024-10-01 08:46:08.496486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.703 [2024-10-01 08:46:08.496526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.703 [2024-10-01 08:46:08.496537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.703 [2024-10-01 08:46:08.496773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.703 [2024-10-01 08:46:08.497003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.703 [2024-10-01 08:46:08.497019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.703 [2024-10-01 08:46:08.497026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.703 [2024-10-01 08:46:08.500517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.703 [2024-10-01 08:46:08.509764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.703 [2024-10-01 08:46:08.510401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.703 [2024-10-01 08:46:08.510440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.703 [2024-10-01 08:46:08.510451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.703 [2024-10-01 08:46:08.510687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.703 [2024-10-01 08:46:08.510907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.703 [2024-10-01 08:46:08.510917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.703 [2024-10-01 08:46:08.510925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.703 [2024-10-01 08:46:08.514427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.965 [2024-10-01 08:46:08.523683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.965 [2024-10-01 08:46:08.524332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.965 [2024-10-01 08:46:08.524371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.965 [2024-10-01 08:46:08.524383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.965 [2024-10-01 08:46:08.524619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.965 [2024-10-01 08:46:08.524839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.965 [2024-10-01 08:46:08.524849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.965 [2024-10-01 08:46:08.524857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.528357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.966 [2024-10-01 08:46:08.537623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.538291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.538331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.538342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.538578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.538799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.538808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.538816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.542319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.966 [2024-10-01 08:46:08.551377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.552025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.552065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.552078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.552315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.552536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.552546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.552553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.556055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.966 [2024-10-01 08:46:08.565309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.565837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.565877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.565888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.566133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.566356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.566366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.566374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.569865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.966 [2024-10-01 08:46:08.579152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.579775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.579814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.579825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.580070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.580291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.580300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.580308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.583800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.966 [2024-10-01 08:46:08.593057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.593723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.593762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.593773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.594023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.594245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.594255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.594262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.597755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.966 [2024-10-01 08:46:08.606801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.607430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.607470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.607481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.607717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.607937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.607947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.607955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.611454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.966 [2024-10-01 08:46:08.620709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.621333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.621373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.621384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.621620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.621840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.621850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.621858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.625358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.966 [2024-10-01 08:46:08.634616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.635243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.635283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.635294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.635529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.635750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.635760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.635773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.639284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.966 [2024-10-01 08:46:08.648549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.649122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.649162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.649174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.649413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.649633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.649643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.649651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.966 [2024-10-01 08:46:08.653155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.966 [2024-10-01 08:46:08.662405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.966 [2024-10-01 08:46:08.663093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.966 [2024-10-01 08:46:08.663133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.966 [2024-10-01 08:46:08.663146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.966 [2024-10-01 08:46:08.663383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.966 [2024-10-01 08:46:08.663604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.966 [2024-10-01 08:46:08.663613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.966 [2024-10-01 08:46:08.663621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.667123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.967 [2024-10-01 08:46:08.676178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.967 [2024-10-01 08:46:08.676822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.967 [2024-10-01 08:46:08.676862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.967 [2024-10-01 08:46:08.676873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.967 [2024-10-01 08:46:08.677117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.967 [2024-10-01 08:46:08.677338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.967 [2024-10-01 08:46:08.677348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.967 [2024-10-01 08:46:08.677356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.680851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.967 [2024-10-01 08:46:08.690109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.967 [2024-10-01 08:46:08.690574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.967 [2024-10-01 08:46:08.690594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.967 [2024-10-01 08:46:08.690603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.967 [2024-10-01 08:46:08.690819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.967 [2024-10-01 08:46:08.691043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.967 [2024-10-01 08:46:08.691054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.967 [2024-10-01 08:46:08.691062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.694724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.967 [2024-10-01 08:46:08.703990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.967 [2024-10-01 08:46:08.704515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.967 [2024-10-01 08:46:08.704534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.967 [2024-10-01 08:46:08.704541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.967 [2024-10-01 08:46:08.704758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.967 [2024-10-01 08:46:08.704974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.967 [2024-10-01 08:46:08.704985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.967 [2024-10-01 08:46:08.704992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.708483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.967 [2024-10-01 08:46:08.717732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.967 [2024-10-01 08:46:08.718270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.967 [2024-10-01 08:46:08.718287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.967 [2024-10-01 08:46:08.718295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.967 [2024-10-01 08:46:08.718512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.967 [2024-10-01 08:46:08.718728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.967 [2024-10-01 08:46:08.718737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.967 [2024-10-01 08:46:08.718744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.723879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.967 5367.80 IOPS, 20.97 MiB/s [2024-10-01 08:46:08.731498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.967 [2024-10-01 08:46:08.732019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.967 [2024-10-01 08:46:08.732036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.967 [2024-10-01 08:46:08.732044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.967 [2024-10-01 08:46:08.732264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.967 [2024-10-01 08:46:08.732480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.967 [2024-10-01 08:46:08.732488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.967 [2024-10-01 08:46:08.732495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.735980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.967 [2024-10-01 08:46:08.745253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.967 [2024-10-01 08:46:08.745766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.967 [2024-10-01 08:46:08.745783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.967 [2024-10-01 08:46:08.745791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.967 [2024-10-01 08:46:08.746012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.967 [2024-10-01 08:46:08.746229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.967 [2024-10-01 08:46:08.746239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.967 [2024-10-01 08:46:08.746246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.749730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
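The stray throughput figure embedded just above ("5367.80 IOPS, 20.97 MiB/s") appears to be the benchmark tool's periodic stats line interleaved with the error trace, not part of the reset sequence itself. The two numbers are self-consistent with a 4 KiB I/O size: 5367.80 IOPS x 4096 B = 21,986,508.8 B/s, and 21,986,508.8 / 1,048,576 ≈ 20.97 MiB/s, matching the reported rate. (The 4 KiB block size is inferred from this arithmetic; the log does not state it here.)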
00:31:16.967 [2024-10-01 08:46:08.758978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.967 [2024-10-01 08:46:08.759633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.967 [2024-10-01 08:46:08.759675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.967 [2024-10-01 08:46:08.759686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.967 [2024-10-01 08:46:08.759922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.967 [2024-10-01 08:46:08.760150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.967 [2024-10-01 08:46:08.760162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.967 [2024-10-01 08:46:08.760169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.763661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:16.967 [2024-10-01 08:46:08.772745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:16.967 [2024-10-01 08:46:08.773449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.967 [2024-10-01 08:46:08.773487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:16.967 [2024-10-01 08:46:08.773502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:16.967 [2024-10-01 08:46:08.773741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:16.967 [2024-10-01 08:46:08.773960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:16.967 [2024-10-01 08:46:08.773970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:16.967 [2024-10-01 08:46:08.773983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:16.967 [2024-10-01 08:46:08.777484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:16.967 [2024-10-01 08:46:08.786535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.230 [2024-10-01 08:46:08.787478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.230 [2024-10-01 08:46:08.787504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.230 [2024-10-01 08:46:08.787513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.230 [2024-10-01 08:46:08.787736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.230 [2024-10-01 08:46:08.787954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.230 [2024-10-01 08:46:08.787965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.230 [2024-10-01 08:46:08.787972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.230 [2024-10-01 08:46:08.791468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.230 [2024-10-01 08:46:08.800318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.230 [2024-10-01 08:46:08.800854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.230 [2024-10-01 08:46:08.800871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.230 [2024-10-01 08:46:08.800879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.230 [2024-10-01 08:46:08.801101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.230 [2024-10-01 08:46:08.801317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.230 [2024-10-01 08:46:08.801326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.230 [2024-10-01 08:46:08.801334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.230 [2024-10-01 08:46:08.804817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.230 [2024-10-01 08:46:08.814097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.230 [2024-10-01 08:46:08.814731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.230 [2024-10-01 08:46:08.814771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.230 [2024-10-01 08:46:08.814782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.230 [2024-10-01 08:46:08.815025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.230 [2024-10-01 08:46:08.815247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.230 [2024-10-01 08:46:08.815257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.230 [2024-10-01 08:46:08.815264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.230 [2024-10-01 08:46:08.818758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.230 [2024-10-01 08:46:08.828019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.230 [2024-10-01 08:46:08.828686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.230 [2024-10-01 08:46:08.828730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.828742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.828978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.829207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.829218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.829225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.832716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.231 [2024-10-01 08:46:08.841779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.842345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.842366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.842374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.842591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.842807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.842816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.842823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.846324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.231 [2024-10-01 08:46:08.855575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.856070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.856087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.856095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.856310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.856527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.856536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.856544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.860033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.231 [2024-10-01 08:46:08.869489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.870015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.870032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.870049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.870264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.870484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.870494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.870502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.873989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.231 [2024-10-01 08:46:08.883241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.883858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.883898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.883910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.884155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.884378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.884387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.884395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.887888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.231 [2024-10-01 08:46:08.897149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.897722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.897743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.897751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.897966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.898191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.898203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.898213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.901701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.231 [2024-10-01 08:46:08.910952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.911568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.911607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.911618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.911853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.912082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.912093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.912101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.915599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.231 [2024-10-01 08:46:08.924855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.925514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.925554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.925565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.925801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.926029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.926040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.926048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.929539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.231 [2024-10-01 08:46:08.938603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.939075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.939095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.939103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.939320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.939536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.939545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.939552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.943052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.231 [2024-10-01 08:46:08.952510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.953117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.953158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.953172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.953409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.231 [2024-10-01 08:46:08.953629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.231 [2024-10-01 08:46:08.953640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.231 [2024-10-01 08:46:08.953649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.231 [2024-10-01 08:46:08.957152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.231 [2024-10-01 08:46:08.966259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.231 [2024-10-01 08:46:08.966886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.231 [2024-10-01 08:46:08.966925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.231 [2024-10-01 08:46:08.966941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.231 [2024-10-01 08:46:08.967185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.232 [2024-10-01 08:46:08.967406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.232 [2024-10-01 08:46:08.967417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.232 [2024-10-01 08:46:08.967425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.232 [2024-10-01 08:46:08.970917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.232 [2024-10-01 08:46:08.980181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.232 [2024-10-01 08:46:08.980718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.232 [2024-10-01 08:46:08.980738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.232 [2024-10-01 08:46:08.980746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.232 [2024-10-01 08:46:08.980962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.232 [2024-10-01 08:46:08.981186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.232 [2024-10-01 08:46:08.981196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.232 [2024-10-01 08:46:08.981203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.232 [2024-10-01 08:46:08.984691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.232 [2024-10-01 08:46:08.993944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.232 [2024-10-01 08:46:08.994597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.232 [2024-10-01 08:46:08.994637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.232 [2024-10-01 08:46:08.994648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.232 [2024-10-01 08:46:08.994884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.232 [2024-10-01 08:46:08.995112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.232 [2024-10-01 08:46:08.995123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.232 [2024-10-01 08:46:08.995130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.232 [2024-10-01 08:46:08.998622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.232 [2024-10-01 08:46:09.007679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.232 [2024-10-01 08:46:09.008299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.232 [2024-10-01 08:46:09.008338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.232 [2024-10-01 08:46:09.008350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.232 [2024-10-01 08:46:09.008586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.232 [2024-10-01 08:46:09.008807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.232 [2024-10-01 08:46:09.008821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.232 [2024-10-01 08:46:09.008829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.232 [2024-10-01 08:46:09.012328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.232 [2024-10-01 08:46:09.021584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.232 [2024-10-01 08:46:09.022317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.232 [2024-10-01 08:46:09.022357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.232 [2024-10-01 08:46:09.022368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.232 [2024-10-01 08:46:09.022604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.232 [2024-10-01 08:46:09.022824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.232 [2024-10-01 08:46:09.022834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.232 [2024-10-01 08:46:09.022842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.232 [2024-10-01 08:46:09.026340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.232 [2024-10-01 08:46:09.035395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.232 [2024-10-01 08:46:09.036091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.232 [2024-10-01 08:46:09.036131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.232 [2024-10-01 08:46:09.036144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.232 [2024-10-01 08:46:09.036381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.232 [2024-10-01 08:46:09.036601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.232 [2024-10-01 08:46:09.036611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.232 [2024-10-01 08:46:09.036619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.232 [2024-10-01 08:46:09.040133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.232 [2024-10-01 08:46:09.049198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.232 [2024-10-01 08:46:09.049810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.232 [2024-10-01 08:46:09.049850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.232 [2024-10-01 08:46:09.049861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.232 [2024-10-01 08:46:09.050106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.494 [2024-10-01 08:46:09.050328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.494 [2024-10-01 08:46:09.050339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.494 [2024-10-01 08:46:09.050347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.494 [2024-10-01 08:46:09.053839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.494 [2024-10-01 08:46:09.063105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.494 [2024-10-01 08:46:09.063605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.494 [2024-10-01 08:46:09.063645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.494 [2024-10-01 08:46:09.063658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.494 [2024-10-01 08:46:09.063895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.494 [2024-10-01 08:46:09.064123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.494 [2024-10-01 08:46:09.064134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.494 [2024-10-01 08:46:09.064141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.494 [2024-10-01 08:46:09.067633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.494 [2024-10-01 08:46:09.076891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.494 [2024-10-01 08:46:09.077422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.494 [2024-10-01 08:46:09.077460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.494 [2024-10-01 08:46:09.077472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.494 [2024-10-01 08:46:09.077707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.494 [2024-10-01 08:46:09.077928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.494 [2024-10-01 08:46:09.077937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.494 [2024-10-01 08:46:09.077945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.494 [2024-10-01 08:46:09.081445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.494 [2024-10-01 08:46:09.090702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.494 [2024-10-01 08:46:09.091341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.494 [2024-10-01 08:46:09.091381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.494 [2024-10-01 08:46:09.091392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.494 [2024-10-01 08:46:09.091627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.494 [2024-10-01 08:46:09.091848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.494 [2024-10-01 08:46:09.091859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.494 [2024-10-01 08:46:09.091867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.494 [2024-10-01 08:46:09.095367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.494 [2024-10-01 08:46:09.104627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.494 [2024-10-01 08:46:09.105108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.494 [2024-10-01 08:46:09.105148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.494 [2024-10-01 08:46:09.105161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.494 [2024-10-01 08:46:09.105405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.494 [2024-10-01 08:46:09.105625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.494 [2024-10-01 08:46:09.105636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.494 [2024-10-01 08:46:09.105644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.494 [2024-10-01 08:46:09.109143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.494 [2024-10-01 08:46:09.118401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.494 [2024-10-01 08:46:09.118969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.494 [2024-10-01 08:46:09.118989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.494 [2024-10-01 08:46:09.119002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.494 [2024-10-01 08:46:09.119219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.494 [2024-10-01 08:46:09.119435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.494 [2024-10-01 08:46:09.119444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.494 [2024-10-01 08:46:09.119451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.494 [2024-10-01 08:46:09.122938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.494 [2024-10-01 08:46:09.132191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.494 [2024-10-01 08:46:09.132747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.494 [2024-10-01 08:46:09.132764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.494 [2024-10-01 08:46:09.132772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.494 [2024-10-01 08:46:09.132987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.494 [2024-10-01 08:46:09.133209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.494 [2024-10-01 08:46:09.133220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.494 [2024-10-01 08:46:09.133227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.494 [2024-10-01 08:46:09.136711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.494 [2024-10-01 08:46:09.145981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.494 [2024-10-01 08:46:09.146646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.494 [2024-10-01 08:46:09.146686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.494 [2024-10-01 08:46:09.146697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.494 [2024-10-01 08:46:09.146932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.494 [2024-10-01 08:46:09.147161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.494 [2024-10-01 08:46:09.147172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.494 [2024-10-01 08:46:09.147184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.494 [2024-10-01 08:46:09.150679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.494 [2024-10-01 08:46:09.159761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.494 [2024-10-01 08:46:09.160378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.160417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.160428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.160664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.160884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.160894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.160902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.164402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.495 [2024-10-01 08:46:09.173658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.174384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.174424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.174436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.174674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.174894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.174904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.174912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.178414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.495 [2024-10-01 08:46:09.187466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.188028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.188068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.188080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.188319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.188540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.188549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.188557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.192059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.495 [2024-10-01 08:46:09.201315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.201758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.201778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.201787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.202012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.202231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.202240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.202248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.205739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.495 [2024-10-01 08:46:09.215202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.215876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.215915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.215926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.216169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.216391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.216401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.216408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.219902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.495 [2024-10-01 08:46:09.228951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.229554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.229575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.229583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.229799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.230021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.230030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.230037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.233525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.495 [2024-10-01 08:46:09.242789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.243398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.243437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.243449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.243685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.243910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.243920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.243928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.247430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.495 [2024-10-01 08:46:09.256685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.257280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.257301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.257309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.257526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.257742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.257751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.257758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.261250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.495 [2024-10-01 08:46:09.270502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.271046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.271071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.271080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.271300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.271518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.271527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.271534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.275030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.495 [2024-10-01 08:46:09.284277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.284688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.284707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.284715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.284932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.495 [2024-10-01 08:46:09.285155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.495 [2024-10-01 08:46:09.285165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.495 [2024-10-01 08:46:09.285172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.495 [2024-10-01 08:46:09.288664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.495 [2024-10-01 08:46:09.298119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.495 [2024-10-01 08:46:09.298650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.495 [2024-10-01 08:46:09.298689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.495 [2024-10-01 08:46:09.298702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.495 [2024-10-01 08:46:09.298939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.496 [2024-10-01 08:46:09.299167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.496 [2024-10-01 08:46:09.299178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.496 [2024-10-01 08:46:09.299187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.496 [2024-10-01 08:46:09.302681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.496 [2024-10-01 08:46:09.311936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.496 [2024-10-01 08:46:09.312615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.496 [2024-10-01 08:46:09.312655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.496 [2024-10-01 08:46:09.312666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.496 [2024-10-01 08:46:09.312901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.496 [2024-10-01 08:46:09.313129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.496 [2024-10-01 08:46:09.313140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.496 [2024-10-01 08:46:09.313148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.757 [2024-10-01 08:46:09.316641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.757 [2024-10-01 08:46:09.325692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.757 [2024-10-01 08:46:09.326282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.757 [2024-10-01 08:46:09.326303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.757 [2024-10-01 08:46:09.326312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.757 [2024-10-01 08:46:09.326528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.757 [2024-10-01 08:46:09.326745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.757 [2024-10-01 08:46:09.326754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.757 [2024-10-01 08:46:09.326761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.757 [2024-10-01 08:46:09.330249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.757 [2024-10-01 08:46:09.339511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.757 [2024-10-01 08:46:09.340136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.757 [2024-10-01 08:46:09.340176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.757 [2024-10-01 08:46:09.340194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.757 [2024-10-01 08:46:09.340433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.757 [2024-10-01 08:46:09.340654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.757 [2024-10-01 08:46:09.340664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.757 [2024-10-01 08:46:09.340671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.757 [2024-10-01 08:46:09.344184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.757 [2024-10-01 08:46:09.353241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.757 [2024-10-01 08:46:09.353801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.757 [2024-10-01 08:46:09.353838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.757 [2024-10-01 08:46:09.353850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.757 [2024-10-01 08:46:09.354092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.757 [2024-10-01 08:46:09.354312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.757 [2024-10-01 08:46:09.354321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.757 [2024-10-01 08:46:09.354329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.757 [2024-10-01 08:46:09.357824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.757 [2024-10-01 08:46:09.367093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.757 [2024-10-01 08:46:09.367679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.757 [2024-10-01 08:46:09.367697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.757 [2024-10-01 08:46:09.367705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.757 [2024-10-01 08:46:09.367922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.757 [2024-10-01 08:46:09.368142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.757 [2024-10-01 08:46:09.368153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.757 [2024-10-01 08:46:09.368160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.757 [2024-10-01 08:46:09.371649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.757 [2024-10-01 08:46:09.380899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.757 [2024-10-01 08:46:09.381430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.757 [2024-10-01 08:46:09.381446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.757 [2024-10-01 08:46:09.381454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.757 [2024-10-01 08:46:09.381670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.757 [2024-10-01 08:46:09.381889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.757 [2024-10-01 08:46:09.381898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.757 [2024-10-01 08:46:09.381905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.757 [2024-10-01 08:46:09.385394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.757 [2024-10-01 08:46:09.394647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.757 [2024-10-01 08:46:09.395205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.757 [2024-10-01 08:46:09.395222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.395230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 [2024-10-01 08:46:09.395446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 [2024-10-01 08:46:09.395662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.395670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.758 [2024-10-01 08:46:09.395678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3935799 Killed "${NVMF_APP[@]}" "$@" 00:31:17.758 [2024-10-01 08:46:09.399168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3937499 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3937499 00:31:17.758 [2024-10-01 08:46:09.408455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3937499 ']' 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.758 [2024-10-01 08:46:09.409101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.758 [2024-10-01 08:46:09.409140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.409153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:17.758 [2024-10-01 08:46:09.409393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.758 [2024-10-01 08:46:09.409615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.409630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.758 [2024-10-01 08:46:09.409637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:17.758 08:46:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.758 [2024-10-01 08:46:09.413142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.758 [2024-10-01 08:46:09.422200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.758 [2024-10-01 08:46:09.422668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.758 [2024-10-01 08:46:09.422687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.422695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 [2024-10-01 08:46:09.422912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 [2024-10-01 08:46:09.423134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.423144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.758 [2024-10-01 08:46:09.423151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.758 [2024-10-01 08:46:09.426639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.758 [2024-10-01 08:46:09.436101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.758 [2024-10-01 08:46:09.436558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.758 [2024-10-01 08:46:09.436574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.436581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 [2024-10-01 08:46:09.436796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 [2024-10-01 08:46:09.437017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.437026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.758 [2024-10-01 08:46:09.437032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
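The interleaved xtrace output above marks the cause of the refused connections: bdevperf.sh line 35 reports the previous nvmf_tgt (job 3935799) as Killed, then tgt_init calls nvmfappstart -m 0xE, which launches a fresh nvmf_tgt (pid 3937499) inside the cvl_0_0_ns_spdk network namespace and waits for it to listen on /var/tmp/spdk.sock. Until the new target is configured and its TCP listener on 4420 is back, every reconnect attempt from the host side keeps logging errno = 111. A minimal sketch of that kill-and-restart window, under the assumption that nc is available for port polling (the real harness instead blocks in waitforlisten on the RPC socket):

    kill -9 "$old_tgt_pid"                                        # old target exits; the 4420 listener vanishes
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xE &   # start the replacement, as the log shows
    new_tgt_pid=$!
    until nc -z 10.0.0.2 4420 2>/dev/null; do                     # while this loop spins, connect() => errno 111
        sleep 0.5
    done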
00:31:17.758 [2024-10-01 08:46:09.440527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.758 [2024-10-01 08:46:09.449998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.758 [2024-10-01 08:46:09.450570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.758 [2024-10-01 08:46:09.450586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.450594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 [2024-10-01 08:46:09.450809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 [2024-10-01 08:46:09.451029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.451039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.758 [2024-10-01 08:46:09.451046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.758 [2024-10-01 08:46:09.454539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.758 [2024-10-01 08:46:09.461669] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:31:17.758 [2024-10-01 08:46:09.461717] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.758 [2024-10-01 08:46:09.463791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.758 [2024-10-01 08:46:09.464432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.758 [2024-10-01 08:46:09.464471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.464482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 [2024-10-01 08:46:09.464719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 [2024-10-01 08:46:09.464938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.464949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.758 [2024-10-01 08:46:09.464956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.758 [2024-10-01 08:46:09.468457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.758 [2024-10-01 08:46:09.477548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.758 [2024-10-01 08:46:09.478245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.758 [2024-10-01 08:46:09.478283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.478294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 [2024-10-01 08:46:09.478530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 [2024-10-01 08:46:09.478750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.478760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.758 [2024-10-01 08:46:09.478767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.758 [2024-10-01 08:46:09.482267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.758 [2024-10-01 08:46:09.491313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.758 [2024-10-01 08:46:09.491955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.758 [2024-10-01 08:46:09.492000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.492012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 [2024-10-01 08:46:09.492248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 [2024-10-01 08:46:09.492468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.492478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.758 [2024-10-01 08:46:09.492486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.758 [2024-10-01 08:46:09.496077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.758 [2024-10-01 08:46:09.505148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.758 [2024-10-01 08:46:09.505744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.758 [2024-10-01 08:46:09.505764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.758 [2024-10-01 08:46:09.505772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.758 [2024-10-01 08:46:09.505988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.758 [2024-10-01 08:46:09.506211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.758 [2024-10-01 08:46:09.506219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.759 [2024-10-01 08:46:09.506227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.759 [2024-10-01 08:46:09.509711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.759 [2024-10-01 08:46:09.518958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.759 [2024-10-01 08:46:09.519612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.759 [2024-10-01 08:46:09.519651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.759 [2024-10-01 08:46:09.519662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.759 [2024-10-01 08:46:09.519898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.759 [2024-10-01 08:46:09.520125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.759 [2024-10-01 08:46:09.520134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.759 [2024-10-01 08:46:09.520142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.759 [2024-10-01 08:46:09.523635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.759 [2024-10-01 08:46:09.532688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.759 [2024-10-01 08:46:09.533412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.759 [2024-10-01 08:46:09.533450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.759 [2024-10-01 08:46:09.533462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.759 [2024-10-01 08:46:09.533697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.759 [2024-10-01 08:46:09.533917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.759 [2024-10-01 08:46:09.533926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.759 [2024-10-01 08:46:09.533933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.759 [2024-10-01 08:46:09.537434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.759 [2024-10-01 08:46:09.543686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:17.759 [2024-10-01 08:46:09.546511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.759 [2024-10-01 08:46:09.547134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.759 [2024-10-01 08:46:09.547172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.759 [2024-10-01 08:46:09.547188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.759 [2024-10-01 08:46:09.547424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.759 [2024-10-01 08:46:09.547644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.759 [2024-10-01 08:46:09.547653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.759 [2024-10-01 08:46:09.547661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.759 [2024-10-01 08:46:09.551166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.759 [2024-10-01 08:46:09.560425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.759 [2024-10-01 08:46:09.561116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.759 [2024-10-01 08:46:09.561154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.759 [2024-10-01 08:46:09.561165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.759 [2024-10-01 08:46:09.561401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.759 [2024-10-01 08:46:09.561620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.759 [2024-10-01 08:46:09.561629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.759 [2024-10-01 08:46:09.561637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.759 [2024-10-01 08:46:09.565141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.759 [2024-10-01 08:46:09.574195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.759 [2024-10-01 08:46:09.574928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.759 [2024-10-01 08:46:09.574967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:17.759 [2024-10-01 08:46:09.574981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:17.759 [2024-10-01 08:46:09.575226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:17.759 [2024-10-01 08:46:09.575447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.759 [2024-10-01 08:46:09.575456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.759 [2024-10-01 08:46:09.575464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.021 [2024-10-01 08:46:09.578959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:18.021 [2024-10-01 08:46:09.588017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.021 [2024-10-01 08:46:09.588600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.021 [2024-10-01 08:46:09.588618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.021 [2024-10-01 08:46:09.588627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.021 [2024-10-01 08:46:09.588843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.021 [2024-10-01 08:46:09.589065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.021 [2024-10-01 08:46:09.589079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.021 [2024-10-01 08:46:09.589087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.021 [2024-10-01 08:46:09.592575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.021 [2024-10-01 08:46:09.596358] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.021 [2024-10-01 08:46:09.596382] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.021 [2024-10-01 08:46:09.596389] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.021 [2024-10-01 08:46:09.596394] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.021 [2024-10-01 08:46:09.596399] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.021 [2024-10-01 08:46:09.597259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:18.021 [2024-10-01 08:46:09.597632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.021 [2024-10-01 08:46:09.597633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:18.021 [2024-10-01 08:46:09.601824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.021 [2024-10-01 08:46:09.602478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.021 [2024-10-01 08:46:09.602517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.021 [2024-10-01 08:46:09.602528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.021 [2024-10-01 08:46:09.602765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.021 [2024-10-01 08:46:09.602985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.021 [2024-10-01 08:46:09.603003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.021 [2024-10-01 08:46:09.603011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:18.021 [2024-10-01 08:46:09.606508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.021 [2024-10-01 08:46:09.615558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.021 [2024-10-01 08:46:09.616105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.021 [2024-10-01 08:46:09.616145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.021 [2024-10-01 08:46:09.616158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.021 [2024-10-01 08:46:09.616398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.021 [2024-10-01 08:46:09.616618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.021 [2024-10-01 08:46:09.616628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.021 [2024-10-01 08:46:09.616636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.021 [2024-10-01 08:46:09.620138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.021 [2024-10-01 08:46:09.629394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.021 [2024-10-01 08:46:09.629956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.021 [2024-10-01 08:46:09.629976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.021 [2024-10-01 08:46:09.629990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.021 [2024-10-01 08:46:09.630214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.021 [2024-10-01 08:46:09.630430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.021 [2024-10-01 08:46:09.630438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.021 [2024-10-01 08:46:09.630446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.022 [2024-10-01 08:46:09.633931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
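The startup NOTICEs above confirm the replacement target is up: tracepoint group mask 0xFFFF is active (from the -e 0xFFFF flag) and three reactors start on cores 1, 2 and 3, matching the -m 0xE core mask. Per the app's own hint, a snapshot of runtime events can be captured with spdk_trace, or the shared-memory trace file copied for offline analysis; both commands are taken from the NOTICE text, with an illustrative destination path on the copy:

    spdk_trace -s nvmf -i 0                   # snapshot of events at runtime
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0   # keep a copy for offline analysis/debug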
00:31:18.022 [2024-10-01 08:46:09.643210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.643839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.643879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.643891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.644135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.644356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.644365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.644373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.647866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 [2024-10-01 08:46:09.657124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.657748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.657786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.657798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.658042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.658263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.658272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.658279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.661771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 [2024-10-01 08:46:09.671032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.671649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.671688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.671699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.671935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.672163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.672178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.672186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.675678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 [2024-10-01 08:46:09.684930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.685610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.685649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.685660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.685896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.686124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.686134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.686142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.689632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 [2024-10-01 08:46:09.698686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.699349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.699388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.699399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.699635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.699854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.699863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.699871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.703369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 [2024-10-01 08:46:09.712626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.713283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.713322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.713334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.713570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.713789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.713798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.713806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.717306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 4473.17 IOPS, 17.47 MiB/s [2024-10-01 08:46:09.728026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.728650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.728689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.728700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.728936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.729163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.729173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.729181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.732672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 [2024-10-01 08:46:09.741938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.742458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.742497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.742508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.742744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.742963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.742972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.742979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.746481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
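The bdevperf counter interleaved above is internally consistent: 17.47 MiB/s over 4473.17 IOPS works out to roughly 4096 bytes per operation, which suggests this run is issuing 4 KiB I/Os (bdevperf rounds both figures, so the quotient is approximate). A quick check:

  echo 'scale=1; 17.47 * 1024 * 1024 / 4473.17' | bc   # ~4095 bytes per I/O, i.e. ~4 KiB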
00:31:18.022 [2024-10-01 08:46:09.755735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.756368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.756408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.756418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.756654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.756874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.756883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.756891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.760388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 [2024-10-01 08:46:09.769639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.770315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.022 [2024-10-01 08:46:09.770353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.022 [2024-10-01 08:46:09.770369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.022 [2024-10-01 08:46:09.770605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.022 [2024-10-01 08:46:09.770825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.022 [2024-10-01 08:46:09.770834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.022 [2024-10-01 08:46:09.770842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.022 [2024-10-01 08:46:09.774340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.022 [2024-10-01 08:46:09.783392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.022 [2024-10-01 08:46:09.784028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.023 [2024-10-01 08:46:09.784067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.023 [2024-10-01 08:46:09.784080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.023 [2024-10-01 08:46:09.784318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.023 [2024-10-01 08:46:09.784537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.023 [2024-10-01 08:46:09.784546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.023 [2024-10-01 08:46:09.784554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.023 [2024-10-01 08:46:09.788054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.023 [2024-10-01 08:46:09.797309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.023 [2024-10-01 08:46:09.797950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.023 [2024-10-01 08:46:09.797988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.023 [2024-10-01 08:46:09.798008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.023 [2024-10-01 08:46:09.798247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.023 [2024-10-01 08:46:09.798467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.023 [2024-10-01 08:46:09.798475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.023 [2024-10-01 08:46:09.798483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.023 [2024-10-01 08:46:09.801976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.023 [2024-10-01 08:46:09.811229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.023 [2024-10-01 08:46:09.811821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.023 [2024-10-01 08:46:09.811860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.023 [2024-10-01 08:46:09.811871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.023 [2024-10-01 08:46:09.812114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.023 [2024-10-01 08:46:09.812335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.023 [2024-10-01 08:46:09.812348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.023 [2024-10-01 08:46:09.812356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.023 [2024-10-01 08:46:09.815849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.023 [2024-10-01 08:46:09.825107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.023 [2024-10-01 08:46:09.825746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.023 [2024-10-01 08:46:09.825784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.023 [2024-10-01 08:46:09.825795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.023 [2024-10-01 08:46:09.826037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.023 [2024-10-01 08:46:09.826258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.023 [2024-10-01 08:46:09.826268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.023 [2024-10-01 08:46:09.826275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.023 [2024-10-01 08:46:09.829768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.023 [2024-10-01 08:46:09.839027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.023 [2024-10-01 08:46:09.839631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.023 [2024-10-01 08:46:09.839670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.023 [2024-10-01 08:46:09.839681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.023 [2024-10-01 08:46:09.839917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.023 [2024-10-01 08:46:09.840145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.023 [2024-10-01 08:46:09.840154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.023 [2024-10-01 08:46:09.840162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.285 [2024-10-01 08:46:09.843676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.285 [2024-10-01 08:46:09.852933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.285 [2024-10-01 08:46:09.853602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.285 [2024-10-01 08:46:09.853641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.285 [2024-10-01 08:46:09.853652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.285 [2024-10-01 08:46:09.853888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.285 [2024-10-01 08:46:09.854116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.285 [2024-10-01 08:46:09.854126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.285 [2024-10-01 08:46:09.854134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.285 [2024-10-01 08:46:09.857625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.285 [2024-10-01 08:46:09.866674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.285 [2024-10-01 08:46:09.867284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.285 [2024-10-01 08:46:09.867323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.285 [2024-10-01 08:46:09.867334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.285 [2024-10-01 08:46:09.867571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.285 [2024-10-01 08:46:09.867791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.285 [2024-10-01 08:46:09.867800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.285 [2024-10-01 08:46:09.867807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.285 [2024-10-01 08:46:09.871306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.285 [2024-10-01 08:46:09.880558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.285 [2024-10-01 08:46:09.881167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.285 [2024-10-01 08:46:09.881206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.285 [2024-10-01 08:46:09.881218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.285 [2024-10-01 08:46:09.881455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.285 [2024-10-01 08:46:09.881675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.285 [2024-10-01 08:46:09.881684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.285 [2024-10-01 08:46:09.881691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.285 [2024-10-01 08:46:09.885190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.285 [2024-10-01 08:46:09.894444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.285 [2024-10-01 08:46:09.894985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.285 [2024-10-01 08:46:09.895010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.285 [2024-10-01 08:46:09.895019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.285 [2024-10-01 08:46:09.895235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.285 [2024-10-01 08:46:09.895450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.285 [2024-10-01 08:46:09.895458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.285 [2024-10-01 08:46:09.895465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.285 [2024-10-01 08:46:09.898954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.285 [2024-10-01 08:46:09.908219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.285 [2024-10-01 08:46:09.908684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.285 [2024-10-01 08:46:09.908700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.285 [2024-10-01 08:46:09.908707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.285 [2024-10-01 08:46:09.908928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.285 [2024-10-01 08:46:09.909149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.285 [2024-10-01 08:46:09.909157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.285 [2024-10-01 08:46:09.909164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.285 [2024-10-01 08:46:09.912651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.285 [2024-10-01 08:46:09.922117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.285 [2024-10-01 08:46:09.922734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.285 [2024-10-01 08:46:09.922773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.285 [2024-10-01 08:46:09.922784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.285 [2024-10-01 08:46:09.923028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.285 [2024-10-01 08:46:09.923248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.285 [2024-10-01 08:46:09.923258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.285 [2024-10-01 08:46:09.923266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.285 [2024-10-01 08:46:09.926758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.285 [2024-10-01 08:46:09.936020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.285 [2024-10-01 08:46:09.936556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.285 [2024-10-01 08:46:09.936574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.285 [2024-10-01 08:46:09.936582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.285 [2024-10-01 08:46:09.936798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.285 [2024-10-01 08:46:09.937019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.285 [2024-10-01 08:46:09.937028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.285 [2024-10-01 08:46:09.937035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.285 [2024-10-01 08:46:09.940519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.285 [2024-10-01 08:46:09.949786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.285 [2024-10-01 08:46:09.950288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.285 [2024-10-01 08:46:09.950305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:09.950312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:09.950528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:09.950743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:09.950751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:09.950763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:09.954252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:09.963698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:09.964280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:09.964318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:09.964329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:09.964565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:09.964785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:09.964793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:09.964801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:09.968302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:09.977564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:09.978074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:09.978094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:09.978102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:09.978319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:09.978534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:09.978542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:09.978549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:09.982038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:09.991420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:09.991952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:09.991969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:09.991976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:09.992197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:09.992414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:09.992422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:09.992429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:09.995913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:10.005664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:10.006352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:10.006395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:10.006407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:10.006642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:10.006863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:10.006872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:10.006879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:10.010378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:10.019605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:10.020301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:10.020340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:10.020351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:10.020587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:10.020807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:10.020816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:10.020824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:10.024326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:10.033379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:10.033885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:10.033923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:10.033936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:10.034182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:10.034403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:10.034412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:10.034420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:10.037910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:10.048549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:10.049170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:10.049197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:10.049210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:10.049503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:10.049798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:10.049814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:10.049827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:10.054475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:10.063796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:10.064442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:10.064465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:10.064477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:10.064765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:10.065062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:10.065074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:10.065086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:10.069690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:10.078949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:10.079476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:10.079501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:10.079513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.286 [2024-10-01 08:46:10.079798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.286 [2024-10-01 08:46:10.080094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.286 [2024-10-01 08:46:10.080109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.286 [2024-10-01 08:46:10.080120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.286 [2024-10-01 08:46:10.084618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.286 [2024-10-01 08:46:10.092610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.286 [2024-10-01 08:46:10.093250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.286 [2024-10-01 08:46:10.093289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.286 [2024-10-01 08:46:10.093300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.287 [2024-10-01 08:46:10.093536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.287 [2024-10-01 08:46:10.093756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.287 [2024-10-01 08:46:10.093766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.287 [2024-10-01 08:46:10.093773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.287 [2024-10-01 08:46:10.097281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.549 [2024-10-01 08:46:10.106537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.549 [2024-10-01 08:46:10.107096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.549 [2024-10-01 08:46:10.107134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.549 [2024-10-01 08:46:10.107146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.549 [2024-10-01 08:46:10.107385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.549 [2024-10-01 08:46:10.107605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.549 [2024-10-01 08:46:10.107614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.549 [2024-10-01 08:46:10.107623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.549 [2024-10-01 08:46:10.111126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.549 [2024-10-01 08:46:10.120383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.549 [2024-10-01 08:46:10.121024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.549 [2024-10-01 08:46:10.121063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.549 [2024-10-01 08:46:10.121075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.549 [2024-10-01 08:46:10.121314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.549 [2024-10-01 08:46:10.121533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.549 [2024-10-01 08:46:10.121541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.549 [2024-10-01 08:46:10.121549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.549 [2024-10-01 08:46:10.125045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.549 [2024-10-01 08:46:10.134300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.549 [2024-10-01 08:46:10.134893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.549 [2024-10-01 08:46:10.134932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.549 [2024-10-01 08:46:10.134943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.549 [2024-10-01 08:46:10.135186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.549 [2024-10-01 08:46:10.135407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.549 [2024-10-01 08:46:10.135416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.549 [2024-10-01 08:46:10.135424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.549 [2024-10-01 08:46:10.138915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.549 [2024-10-01 08:46:10.148196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.549 [2024-10-01 08:46:10.148741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.549 [2024-10-01 08:46:10.148761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.549 [2024-10-01 08:46:10.148778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.549 [2024-10-01 08:46:10.149002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.549 [2024-10-01 08:46:10.149219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.549 [2024-10-01 08:46:10.149228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.549 [2024-10-01 08:46:10.149235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.549 [2024-10-01 08:46:10.152718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.549 [2024-10-01 08:46:10.161966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.549 [2024-10-01 08:46:10.162412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.549 [2024-10-01 08:46:10.162429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.549 [2024-10-01 08:46:10.162436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.549 [2024-10-01 08:46:10.162652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.549 [2024-10-01 08:46:10.162867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.549 [2024-10-01 08:46:10.162876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.549 [2024-10-01 08:46:10.162883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.549 [2024-10-01 08:46:10.166372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.549 [2024-10-01 08:46:10.175820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.549 [2024-10-01 08:46:10.176437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.549 [2024-10-01 08:46:10.176476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.549 [2024-10-01 08:46:10.176487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.549 [2024-10-01 08:46:10.176723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.549 [2024-10-01 08:46:10.176944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.549 [2024-10-01 08:46:10.176953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.549 [2024-10-01 08:46:10.176961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.549 [2024-10-01 08:46:10.180462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.549 [2024-10-01 08:46:10.189715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.549 [2024-10-01 08:46:10.190324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.549 [2024-10-01 08:46:10.190363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.549 [2024-10-01 08:46:10.190374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.549 [2024-10-01 08:46:10.190609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.549 [2024-10-01 08:46:10.190829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.549 [2024-10-01 08:46:10.190843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.550 [2024-10-01 08:46:10.190850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.550 [2024-10-01 08:46:10.194348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.550 [2024-10-01 08:46:10.203602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.550 [2024-10-01 08:46:10.204300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.550 [2024-10-01 08:46:10.204339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.550 [2024-10-01 08:46:10.204351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.550 [2024-10-01 08:46:10.204586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.550 [2024-10-01 08:46:10.204807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.550 [2024-10-01 08:46:10.204815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.550 [2024-10-01 08:46:10.204823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.550 [2024-10-01 08:46:10.208323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.550 [2024-10-01 08:46:10.217370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.550 [2024-10-01 08:46:10.217977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.550 [2024-10-01 08:46:10.218022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.550 [2024-10-01 08:46:10.218036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.550 [2024-10-01 08:46:10.218275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.550 [2024-10-01 08:46:10.218495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.550 [2024-10-01 08:46:10.218505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.550 [2024-10-01 08:46:10.218513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.550 [2024-10-01 08:46:10.222011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.550 [2024-10-01 08:46:10.231263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.550 [2024-10-01 08:46:10.231801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.550 [2024-10-01 08:46:10.231819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.550 [2024-10-01 08:46:10.231827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.550 [2024-10-01 08:46:10.232048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.550 [2024-10-01 08:46:10.232265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.550 [2024-10-01 08:46:10.232274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.550 [2024-10-01 08:46:10.232281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.550 [2024-10-01 08:46:10.235765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.550 [2024-10-01 08:46:10.245037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:18.550 [2024-10-01 08:46:10.245686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:18.550 [2024-10-01 08:46:10.245724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420
00:31:18.550 [2024-10-01 08:46:10.245736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set
00:31:18.550 [2024-10-01 08:46:10.245971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor
00:31:18.550 [2024-10-01 08:46:10.246199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:18.550 [2024-10-01 08:46:10.246209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:18.550 [2024-10-01 08:46:10.246216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:18.550 [2024-10-01 08:46:10.249708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:18.550 [2024-10-01 08:46:10.258955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.550 [2024-10-01 08:46:10.259499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.550 [2024-10-01 08:46:10.259538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.550 [2024-10-01 08:46:10.259550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.550 [2024-10-01 08:46:10.259790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.550 [2024-10-01 08:46:10.260017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.550 [2024-10-01 08:46:10.260027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.550 [2024-10-01 08:46:10.260035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.550 [2024-10-01 08:46:10.263526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.550 [2024-10-01 08:46:10.272782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.550 [2024-10-01 08:46:10.273264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.550 [2024-10-01 08:46:10.273283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.550 [2024-10-01 08:46:10.273291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.550 [2024-10-01 08:46:10.273508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.550 [2024-10-01 08:46:10.273723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.550 [2024-10-01 08:46:10.273732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.550 [2024-10-01 08:46:10.273739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.550 [2024-10-01 08:46:10.277229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:18.550 [2024-10-01 08:46:10.286693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.550 [2024-10-01 08:46:10.287298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.550 [2024-10-01 08:46:10.287338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.550 [2024-10-01 08:46:10.287350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.550 [2024-10-01 08:46:10.287586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.550 [2024-10-01 08:46:10.287807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.550 [2024-10-01 08:46:10.287816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.550 [2024-10-01 08:46:10.287824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.550 [2024-10-01 08:46:10.291322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:18.550 [2024-10-01 08:46:10.300576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.550 [2024-10-01 08:46:10.301088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.550 [2024-10-01 08:46:10.301108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.550 [2024-10-01 08:46:10.301116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.550 [2024-10-01 08:46:10.301332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.550 [2024-10-01 08:46:10.301548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.550 [2024-10-01 08:46:10.301556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.550 [2024-10-01 08:46:10.301563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.550 [2024-10-01 08:46:10.305076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.550 [2024-10-01 08:46:10.305106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
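Note: host/bdevperf.sh@17 creates the TCP transport with an 8192-byte IO unit (-u 8192), and the tcp.c NOTICE just above ("*** TCP Transport Init ***") confirms it came up despite the reconnect errors interleaved around it. One hedged way to confirm the transport from outside the test is the matching query RPC (rpc.py path assumed):

  scripts/rpc.py nvmf_get_transports --trtype tcp   # should list the tcp transport with io_unit_size 8192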
00:31:18.550 [2024-10-01 08:46:10.314353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.550 [2024-10-01 08:46:10.315011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.550 [2024-10-01 08:46:10.315049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.550 [2024-10-01 08:46:10.315062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.550 [2024-10-01 08:46:10.315301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.550 [2024-10-01 08:46:10.315520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.550 [2024-10-01 08:46:10.315529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.550 [2024-10-01 08:46:10.315536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.550 [2024-10-01 08:46:10.319039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:18.550 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:18.551 [2024-10-01 08:46:10.328086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.551 [2024-10-01 08:46:10.328725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.551 [2024-10-01 08:46:10.328764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.551 [2024-10-01 08:46:10.328775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.551 [2024-10-01 08:46:10.329019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.551 [2024-10-01 08:46:10.329240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.551 [2024-10-01 08:46:10.329249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.551 [2024-10-01 08:46:10.329256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.551 [2024-10-01 08:46:10.332748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:18.551 Malloc0 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.551 [2024-10-01 08:46:10.342056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:18.551 [2024-10-01 08:46:10.342589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.551 [2024-10-01 08:46:10.342607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.551 [2024-10-01 08:46:10.342615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.551 [2024-10-01 08:46:10.342832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.551 [2024-10-01 08:46:10.343076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.551 [2024-10-01 08:46:10.343086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.551 [2024-10-01 08:46:10.343093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.551 [2024-10-01 08:46:10.346583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:18.551 [2024-10-01 08:46:10.355838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.551 [2024-10-01 08:46:10.356464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.551 [2024-10-01 08:46:10.356502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.551 [2024-10-01 08:46:10.356518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.551 [2024-10-01 08:46:10.356754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.551 [2024-10-01 08:46:10.356974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.551 [2024-10-01 08:46:10.356984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.551 [2024-10-01 08:46:10.356991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:18.551 [2024-10-01 08:46:10.360491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.551 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:18.551 [2024-10-01 08:46:10.369744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.813 [2024-10-01 08:46:10.370265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.813 [2024-10-01 08:46:10.370285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2470280 with addr=10.0.0.2, port=4420 00:31:18.813 [2024-10-01 08:46:10.370293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2470280 is same with the state(6) to be set 00:31:18.813 [2024-10-01 08:46:10.370510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470280 (9): Bad file descriptor 00:31:18.813 [2024-10-01 08:46:10.370725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.813 [2024-10-01 08:46:10.370734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.813 [2024-10-01 08:46:10.370741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.813 [2024-10-01 08:46:10.372153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.813 [2024-10-01 08:46:10.374233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.813 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.813 08:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3936168 00:31:18.813 [2024-10-01 08:46:10.383475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.813 [2024-10-01 08:46:10.432155] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
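Note: with the resets still failing in the background, host/bdevperf.sh@17-21 has now assembled the whole target: TCP transport, a Malloc0 bdev (64 MiB total, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with the namespace attached, and the listener on 10.0.0.2:4420 ("*** NVMe/TCP Target Listening ***" above). Issued directly with SPDK's rpc.py, the same bring-up would be (rpc.py path assumed; the arguments are the ones in the rpc_cmd traces above):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As soon as the listener is up, the pending reconnects succeed ("Resetting controller successful." above) and the bdevperf readings that follow climb back toward full throughput; with 4096-byte I/Os the MiB/s column is just IOPS scaled by the I/O size, e.g. awk 'BEGIN { printf "%.2f\n", 7965.37 * 4096 / (1024 * 1024) }' prints the 31.11 MiB/s of the summary row.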
00:31:27.039 4335.14 IOPS, 16.93 MiB/s
5208.12 IOPS, 20.34 MiB/s
5863.89 IOPS, 22.91 MiB/s
6375.40 IOPS, 24.90 MiB/s
6805.91 IOPS, 26.59 MiB/s
7170.92 IOPS, 28.01 MiB/s
7472.77 IOPS, 29.19 MiB/s
7738.93 IOPS, 30.23 MiB/s
7962.93 IOPS, 31.11 MiB/s
00:31:27.039 Latency(us)
00:31:27.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:27.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:27.039 Verification LBA range: start 0x0 length 0x4000
00:31:27.039 Nvme1n1 : 15.01 7965.37 31.11 9943.65 0.00 7122.38 785.07 13653.33
00:31:27.039 ===================================================================================================================
00:31:27.039 Total : 7965.37 31.11 9943.65 0.00 7122.38 785.07 13653.33
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:27.299 rmmod nvme_tcp
00:31:27.299 rmmod nvme_fabrics
00:31:27.299 rmmod nvme_keyring
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 3937499 ']'
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 3937499
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3937499 ']'
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3937499
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:27.299 08:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3937499
00:31:27.299 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:31:27.299 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:31:27.299 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf --
common/autotest_common.sh@968 -- # echo 'killing process with pid 3937499' 00:31:27.299 killing process with pid 3937499 00:31:27.299 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3937499 00:31:27.299 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3937499 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.559 08:46:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.470 08:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.470 00:31:29.470 real 0m27.948s 00:31:29.470 user 1m3.128s 00:31:29.470 sys 0m7.357s 00:31:29.470 08:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:29.470 08:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.470 ************************************ 00:31:29.470 END TEST nvmf_bdevperf 00:31:29.470 ************************************ 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.731 ************************************ 00:31:29.731 START TEST nvmf_target_disconnect 00:31:29.731 ************************************ 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:29.731 * Looking for test storage... 
00:31:29.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:29.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.731 --rc genhtml_branch_coverage=1 00:31:29.731 --rc genhtml_function_coverage=1 00:31:29.731 --rc genhtml_legend=1 00:31:29.731 --rc geninfo_all_blocks=1 00:31:29.731 --rc geninfo_unexecuted_blocks=1 00:31:29.731 00:31:29.731 ' 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:29.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.731 --rc genhtml_branch_coverage=1 00:31:29.731 --rc genhtml_function_coverage=1 00:31:29.731 --rc genhtml_legend=1 00:31:29.731 --rc geninfo_all_blocks=1 00:31:29.731 --rc geninfo_unexecuted_blocks=1 00:31:29.731 00:31:29.731 ' 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:29.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.731 --rc genhtml_branch_coverage=1 00:31:29.731 --rc genhtml_function_coverage=1 00:31:29.731 --rc genhtml_legend=1 00:31:29.731 --rc geninfo_all_blocks=1 00:31:29.731 --rc geninfo_unexecuted_blocks=1 00:31:29.731 00:31:29.731 ' 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:29.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.731 --rc genhtml_branch_coverage=1 00:31:29.731 --rc genhtml_function_coverage=1 00:31:29.731 --rc genhtml_legend=1 00:31:29.731 --rc geninfo_all_blocks=1 00:31:29.731 --rc geninfo_unexecuted_blocks=1 00:31:29.731 00:31:29.731 ' 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.731 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:29.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.993 08:46:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:38.128 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:38.128 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:38.128 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:38.128 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.128 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
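Note: the e810 NIC scan above found two ports, cvl_0_0 and cvl_0_1, and the nvmf_tcp_init calls traced below pick cvl_0_0 as the target interface (10.0.0.2) and cvl_0_1 as the initiator (10.0.0.1), moving the target port into its own network namespace so host and target traffic crosses a real TCP path. Condensed, the wiring traced below is (root required; all names and addresses as in this log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

followed by the iptables ACCEPT rule for port 4420 and the two ping checks whose output appears below.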
00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:38.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:38.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:31:38.129 00:31:38.129 --- 10.0.0.2 ping statistics --- 00:31:38.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.129 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:38.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:38.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:31:38.129 00:31:38.129 --- 10.0.0.1 ping statistics --- 00:31:38.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.129 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:38.129 ************************************ 00:31:38.129 START TEST nvmf_target_disconnect_tc1 00:31:38.129 ************************************ 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:38.129 08:46:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:38.129 08:46:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:38.129 [2024-10-01 08:46:29.008909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:38.129 [2024-10-01 08:46:29.008969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ba0 with addr=10.0.0.2, port=4420 00:31:38.129 [2024-10-01 08:46:29.009023] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:38.129 [2024-10-01 08:46:29.009035] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:38.129 [2024-10-01 08:46:29.009043] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:38.129 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:38.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:38.129 Initializing NVMe Controllers 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:38.129 00:31:38.129 real 0m0.115s 00:31:38.129 user 0m0.053s 00:31:38.129 sys 0m0.062s 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:38.129 ************************************ 00:31:38.129 END TEST nvmf_target_disconnect_tc1 00:31:38.129 ************************************ 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:38.129 ************************************ 00:31:38.129 START TEST nvmf_target_disconnect_tc2 00:31:38.129 ************************************ 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3943546 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3943546 00:31:38.129 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:38.130 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3943546 ']' 00:31:38.130 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.130 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:38.130 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.130 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:38.130 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.130 [2024-10-01 08:46:29.175837] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:31:38.130 [2024-10-01 08:46:29.175896] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.130 [2024-10-01 08:46:29.264597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:38.130 [2024-10-01 08:46:29.356412] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.130 [2024-10-01 08:46:29.356470] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
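Note: the tc2 target is started with -m 0xF0, a core mask rather than a count: 0xF0 = 0b11110000 selects cores 4-7, which matches both the "Total cores available: 4" notice above and the four reactor_run notices below; the host-side reconnect tool is run with -c 0xF (cores 0-3), so the two sides never share a core. Decoding the mask in the shell (illustrative only):

  for i in {0..7}; do (( (0xF0 >> i) & 1 )) && echo "reactor expected on core $i"; done   # prints cores 4-7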
00:31:38.130 [2024-10-01 08:46:29.356479] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.130 [2024-10-01 08:46:29.356486] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.130 [2024-10-01 08:46:29.356493] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.130 [2024-10-01 08:46:29.358566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:31:38.130 [2024-10-01 08:46:29.358732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:31:38.130 [2024-10-01 08:46:29.358895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:31:38.130 [2024-10-01 08:46:29.358895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:31:38.389 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:38.389 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:31:38.389 08:46:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.389 Malloc0 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.389 [2024-10-01 08:46:30.077100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.389 08:46:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.389 [2024-10-01 08:46:30.117546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3943644 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:38.389 08:46:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:40.949 08:46:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3943546 00:31:40.949 08:46:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error 
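[Editor's note] The target-side setup traced above maps onto a short RPC sequence. The following is a minimal sketch of the equivalent standalone commands, assuming SPDK's stock scripts/rpc.py client and its default /var/tmp/spdk.sock socket (the rpc.py path and socket are assumptions; the subcommands, subsystem NQN, serial, address, and port are taken verbatim from the trace above):

    # Sketch only: approximate reproduction of the rpc_cmd calls traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                                           # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_transport -t tcp -o                                                # TCP transport with default opts
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001      # allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                       # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this sequence the reconnect example is launched against 10.0.0.2:4420 and the target is killed two seconds later, which is what produces the connect()/errno-111 failure run that follows.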
(sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Read completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 Write completed with error (sct=0, sc=8) 00:31:40.949 starting I/O failed 00:31:40.949 [2024-10-01 08:46:32.158498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:40.949 [2024-10-01 08:46:32.158962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.949 [2024-10-01 08:46:32.158990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.949 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.159431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.159471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.159659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.159677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 
00:31:40.950 [2024-10-01 08:46:32.159823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.159838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.160294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.160331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.160533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.160546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.160917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.160928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.161348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.161386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.161727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.161745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.162052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.162062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.162443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.162453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.162794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.162804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.163137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.163148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 
00:31:40.950 [2024-10-01 08:46:32.163471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.163482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.163706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.163717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.163980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.163991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.164227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.164238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.164472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.164483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.164689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.164700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.165013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.165025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.165336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.165346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.165633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.165643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.165978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.165989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 
00:31:40.950 [2024-10-01 08:46:32.166185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.166198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.166396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.166406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.166687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.166698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.166960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.166970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.167226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.167238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.167558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.167569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.167894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.167905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.168220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.168231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.168589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.168599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.168930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.168941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 
00:31:40.950 [2024-10-01 08:46:32.169261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.169272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.169621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.169632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.169918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.169929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.170061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.170072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.170397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.170408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.171040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.950 [2024-10-01 08:46:32.171051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.950 qpair failed and we were unable to recover it. 00:31:40.950 [2024-10-01 08:46:32.171380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.171391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.171676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.171686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.171968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.171979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.172293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.172304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 
00:31:40.951 [2024-10-01 08:46:32.172670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.172681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.173026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.173038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.173340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.173350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.173680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.173690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.173975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.173984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.174293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.174306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.174672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.174682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.175017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.175027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.175388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.175397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.175569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.175580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 
00:31:40.951 [2024-10-01 08:46:32.175895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.175905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.176106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.176116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.176451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.176461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.176842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.176851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.177150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.177160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.177481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.177491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.177771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.177781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.178113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.178123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.178424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.178434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.178787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.178797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 
00:31:40.951 [2024-10-01 08:46:32.179109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.179120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.179433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.179443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.179724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.179734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.180021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.180032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.180339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.180349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.180587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.180597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.180899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.180909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.181124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.181134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.181315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.181325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.181635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.181645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 
00:31:40.951 [2024-10-01 08:46:32.181980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.181990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.182320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.182330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.182637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.182650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.182931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.951 [2024-10-01 08:46:32.182944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.951 qpair failed and we were unable to recover it. 00:31:40.951 [2024-10-01 08:46:32.183243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.183256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.183552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.183564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.183902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.183915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.184240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.184253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.184588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.184601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.184928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.184941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 
00:31:40.952 [2024-10-01 08:46:32.185248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.185262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.185571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.185584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.185872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.185884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.186199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.186212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.186553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.186566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.186955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.186971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.187276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.187289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.187616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.187629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.187957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.187969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.188236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.188250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 
00:31:40.952 [2024-10-01 08:46:32.188577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.188590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.188895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.188908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.189129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.189142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.189427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.189440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.189778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.189791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.190068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.190080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.190387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.190400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.190690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.190702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.191018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.191031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.191293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.191305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 
00:31:40.952 [2024-10-01 08:46:32.191494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.191506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.191836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.191847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.192223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.192235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.192540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.192551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.192769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.192781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.193092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.193105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.193423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.193436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.193723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.193736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.194026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.194039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.194370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.194382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 
00:31:40.952 [2024-10-01 08:46:32.194583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.194596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.194917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.952 [2024-10-01 08:46:32.194929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.952 qpair failed and we were unable to recover it. 00:31:40.952 [2024-10-01 08:46:32.195270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.195284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it. 00:31:40.953 [2024-10-01 08:46:32.195589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.195601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it. 00:31:40.953 [2024-10-01 08:46:32.195870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.195883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it. 00:31:40.953 [2024-10-01 08:46:32.196204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.196219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it. 00:31:40.953 [2024-10-01 08:46:32.196526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.196540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it. 00:31:40.953 [2024-10-01 08:46:32.196919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.196933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it. 00:31:40.953 [2024-10-01 08:46:32.197239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.197255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it. 00:31:40.953 [2024-10-01 08:46:32.197558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.197572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it. 
00:31:40.953 [2024-10-01 08:46:32.197946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.953 [2024-10-01 08:46:32.197960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.953 qpair failed and we were unable to recover it.
00:31:40.953 [... the same posix_sock_create connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fbda0000b90 (addr=10.0.0.2, port=4420) repeats for every retry between 08:46:32.198 and 08:46:32.264, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:31:40.958 [2024-10-01 08:46:32.264849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.264864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it.
00:31:40.958 [2024-10-01 08:46:32.265238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.265253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.265594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.265608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.265931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.265946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.266349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.266364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.266664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.266679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.267011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.267027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.267358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.267372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.267706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.267721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.268052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.268067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.268385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.268400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-10-01 08:46:32.268728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.268742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.269056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.269071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.269399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.958 [2024-10-01 08:46:32.269413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.958 qpair failed and we were unable to recover it. 00:31:40.958 [2024-10-01 08:46:32.269740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.269754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.270068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.270093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.270330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.270344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.270670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.270684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.270953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.270967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.271348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.271363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.271647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.271661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-10-01 08:46:32.271840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.271855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.272176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.272190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.272410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.272428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.272752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.272766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.273099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.273115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.273433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.273447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.273764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.273785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.273981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.274004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.274324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.274338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.274642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.274657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-10-01 08:46:32.275007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.275023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.275351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.275365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.275542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.275557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.275868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.275883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.276066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.276083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.276374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.276388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.276696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.276711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.277047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.277061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.277379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.277393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.277716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.277731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-10-01 08:46:32.278064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.278079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.278394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.278409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.278738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.278752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.279082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.279097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.279399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.279414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.279629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.279643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.280007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.280022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.280341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.280364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.280669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.280683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.281005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.281021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-10-01 08:46:32.281313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.281327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.281610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.281624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.281938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.959 [2024-10-01 08:46:32.281952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.959 qpair failed and we were unable to recover it. 00:31:40.959 [2024-10-01 08:46:32.282239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.282254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.282446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.282462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.282756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.282770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.283115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.283130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.283446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.283462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.283792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.283806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.284124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.284138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 
00:31:40.960 [2024-10-01 08:46:32.284458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.284472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.284797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.284812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.285178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.285196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.285488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.285504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.285841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.285855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.286166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.286181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.286480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.286494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.286833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.286848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.287172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.287187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.287497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.287511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 
00:31:40.960 [2024-10-01 08:46:32.287801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.287815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.288127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.288143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.288465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.288479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.288812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.288827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.289145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.289160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.289454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.289469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.289801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.289815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.290123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.290138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.290462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.290477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.290752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.290767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 
00:31:40.960 [2024-10-01 08:46:32.291096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.291110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.291436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.291450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.291745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.291760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.292070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.292085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.292404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.292420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.292704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.292719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.293033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.293048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.293340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.293361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.293701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.293716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.294032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.294047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 
00:31:40.960 [2024-10-01 08:46:32.294367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.294382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.294697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.294712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.294989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.295014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.295341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.295356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.295523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.295539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.295856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.295871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.296236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.296250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.296545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.960 [2024-10-01 08:46:32.296560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.960 qpair failed and we were unable to recover it. 00:31:40.960 [2024-10-01 08:46:32.296887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.296902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.297256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.297271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 
00:31:40.961 [2024-10-01 08:46:32.297565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.297579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.297901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.297916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.298254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.298272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.298598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.298612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.298921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.298936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.299150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.299165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.299487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.299501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.299812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.299832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.300156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.300171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.300500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.300515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 
00:31:40.961 [2024-10-01 08:46:32.300836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.300851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.301139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.301154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.301492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.301506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.301818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.301840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.302013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.302028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.302421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.302436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.302756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.302771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.303173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.303188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.303424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.303439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.303825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.303839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 
00:31:40.961 [2024-10-01 08:46:32.304173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.304189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.304526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.304540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.304865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.304879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.305169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.305184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.305524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.305538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.305940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.305954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.306309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.306325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.306658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.306672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.306977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.307001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.307324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.307339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 
00:31:40.961 [2024-10-01 08:46:32.307676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.307691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.308021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.308036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.308373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.308388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.308720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.308734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.309051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.309066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.309299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.309314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.309620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.309634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.309960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.309974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.310280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.310295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.310628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.310643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 
00:31:40.961 [2024-10-01 08:46:32.310958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.310978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.311301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.311316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.311606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.961 [2024-10-01 08:46:32.311624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.961 qpair failed and we were unable to recover it. 00:31:40.961 [2024-10-01 08:46:32.311929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.311943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.312256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.312271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.312605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.312619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.312935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.312957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.313292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.313306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.313617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.313631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.313944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.313959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 
00:31:40.962 [2024-10-01 08:46:32.314235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.314249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.314532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.314546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.314861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.314874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.315187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.315202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.315550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.315565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.315887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.315902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.316208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.316224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.316463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.316478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.316799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.316813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.317150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.317165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 
00:31:40.962 [2024-10-01 08:46:32.317387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.317401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.317714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.317729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.318028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.318043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.318288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.318302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.318616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.318630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.318929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.318943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.319256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.319271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.319469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.319483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.319806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.319821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.320156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.320172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 
00:31:40.962 [2024-10-01 08:46:32.320487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.320501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.320836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.320851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.321185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.321200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.321479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.321493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.321800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.321814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.322155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.322171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.322482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.322496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.322683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.322697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.322944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.322958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.323357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.323372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 
00:31:40.962 [2024-10-01 08:46:32.323669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.323684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.323990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.324011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.324327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.962 [2024-10-01 08:46:32.324346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.962 qpair failed and we were unable to recover it. 00:31:40.962 [2024-10-01 08:46:32.324674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.324689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.324934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.324948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.325269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.325284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.325609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.325624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.325959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.325973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.326342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.326357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.326663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.326678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 
00:31:40.963 [2024-10-01 08:46:32.327021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.327036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.327275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.327289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.327634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.327648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.327964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.327978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.328313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.328329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.328699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.328714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.328974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.328988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.329324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.329339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.329631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.329646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.329943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.329957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 
00:31:40.963 [2024-10-01 08:46:32.330266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.330281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.330626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.330640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.330963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.330979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.331305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.331320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.331636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.331652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.331984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.332004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.332332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.332346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.332685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.332700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.333017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.333033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.333267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.333281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 
00:31:40.963 [2024-10-01 08:46:32.333598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.333612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.333936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.333950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.334244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.334259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.334632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.334646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.334955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.334970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.335258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.335273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.335660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.335674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.336032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.336047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.336383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.336397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.336754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.336769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 
00:31:40.963 [2024-10-01 08:46:32.337073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.337088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.337425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.337440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.337786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.337804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.338102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.338117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.338413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.338428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.338733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.338748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.339090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.339105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.339416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.339431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.339730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.339744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.340094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.340109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 
00:31:40.963 [2024-10-01 08:46:32.340431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.340446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.963 qpair failed and we were unable to recover it. 00:31:40.963 [2024-10-01 08:46:32.340764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.963 [2024-10-01 08:46:32.340778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.341096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.341111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.341431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.341446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.341823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.341838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.342160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.342175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.342478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.342500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.342778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.342792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.343083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.343097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.343412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.343427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 
00:31:40.964 [2024-10-01 08:46:32.343799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.343813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.344155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.344171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.344344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.344359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.344740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.344755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.345054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.345069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.345420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.345435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.345814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.345829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.346131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.346147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.346465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.346479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.346816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.346831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 
00:31:40.964 [2024-10-01 08:46:32.347120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.347135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.347460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.347474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.347847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.347861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.348159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.348175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.348516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.348530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.348812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.348827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.349192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.349207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.349523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.349537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.349877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.349891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.350208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.350223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 
00:31:40.964 [2024-10-01 08:46:32.350546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.350561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.350737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.350753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.351069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.351087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.351396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.351411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.351763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.351777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.352101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.352116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.352454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.352469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.352840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.352855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.353176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.353191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.353513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.353535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 
00:31:40.964 [2024-10-01 08:46:32.353833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.353848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.354171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.354186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.354511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.354526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.354830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.354845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.355178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.355193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.355476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.355490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.355822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.355837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.356135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.356149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.964 qpair failed and we were unable to recover it. 00:31:40.964 [2024-10-01 08:46:32.356479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.964 [2024-10-01 08:46:32.356494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.356804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.356819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 
00:31:40.965 [2024-10-01 08:46:32.357140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.357155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.357555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.357570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.357869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.357884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.358206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.358221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.358553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.358568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.358887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.358901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.359213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.359228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.359536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.359551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.359853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.359868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.360172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.360187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 
00:31:40.965 [2024-10-01 08:46:32.360506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.360521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.360859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.360874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.361170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.361185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.361444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.361458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.361649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.361663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.362014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.362030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.362360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.362374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.362668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.362683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.363006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.363021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.363335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.363356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 
00:31:40.965 [2024-10-01 08:46:32.363672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.363686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.364019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.364035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.364354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.364372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.364739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.364753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.364975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.364989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.365291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.365306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.365639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.365653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.365951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.365966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.366267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.366282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 00:31:40.965 [2024-10-01 08:46:32.366501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.965 [2024-10-01 08:46:32.366515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.965 qpair failed and we were unable to recover it. 
00:31:40.969 [2024-10-01 08:46:32.429681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.429696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.430004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.430020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.430338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.430353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.430657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.430672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.431015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.431031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.431352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.431367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.431656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.431671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.432011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.432026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.433448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.433482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.433888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.433904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 
00:31:40.969 [2024-10-01 08:46:32.434902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.434931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.435255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.435273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.435604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.435619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.436002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.436018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.436319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.436334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.436562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.436577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.436906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.436921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.437232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.969 [2024-10-01 08:46:32.437247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.969 qpair failed and we were unable to recover it. 00:31:40.969 [2024-10-01 08:46:32.437545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.437560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.437793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.437809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 
00:31:40.970 [2024-10-01 08:46:32.438207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.438223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.438511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.438526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.438859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.438875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.439202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.439217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.439537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.439552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.439927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.439941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.440182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.440198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.440546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.440561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.440886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.440900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.441219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.441239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 
00:31:40.970 [2024-10-01 08:46:32.441580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.441594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.441900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.441915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.442243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.442259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.442564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.442580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.442913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.442929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.443253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.443269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.443578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.443594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.443925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.443940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.444251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.444266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.444507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.444523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 
00:31:40.970 [2024-10-01 08:46:32.444846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.444862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.445183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.445199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.445575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.445591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.445882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.445897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.446216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.446231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.446551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.446567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.446863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.446878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.447200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.447216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.447536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.447552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.447923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.447938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 
00:31:40.970 [2024-10-01 08:46:32.448249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.448265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.448568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.448584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.448921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.448938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.449257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.449273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.449605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.449621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.449949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.449963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.450293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.450310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.450642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.450657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.450973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.450988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.451367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.451382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 
00:31:40.970 [2024-10-01 08:46:32.451617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.451632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.451934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.451950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.452162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.452179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.452378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.452393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.452737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.970 [2024-10-01 08:46:32.452752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.970 qpair failed and we were unable to recover it. 00:31:40.970 [2024-10-01 08:46:32.453052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.453068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.453390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.453405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.453589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.453605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.453926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.453941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.454248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.454268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 
00:31:40.971 [2024-10-01 08:46:32.454630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.454645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.454968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.454984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.455359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.455378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.455676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.455692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.455931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.455947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.456152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.456169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.456484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.456499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.456839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.456855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.457162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.457179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.457517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.457532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 
00:31:40.971 [2024-10-01 08:46:32.457837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.457852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.458184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.458200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.458584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.458600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.458935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.458951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.459224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.459241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.459573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.459588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.459912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.459928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.460230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.460246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.460546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.460562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.460865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.460880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 
00:31:40.971 [2024-10-01 08:46:32.461218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.461234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.461532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.461548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.461840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.461856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.462187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.462203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.462402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.462418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.462751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.462767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.463108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.463126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.463476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.463492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.463827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.463843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.464160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.464175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 
00:31:40.971 [2024-10-01 08:46:32.464557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.464574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.464905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.464919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.465194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.465210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.465541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.465556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.465854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.465877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.466212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.466228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.466430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.466445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.466781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.466795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.467133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.467150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.467505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.467524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 
00:31:40.971 [2024-10-01 08:46:32.467836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.467850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.468168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.468183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.468500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.468514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.468819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.468833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.469154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.971 [2024-10-01 08:46:32.469169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.971 qpair failed and we were unable to recover it. 00:31:40.971 [2024-10-01 08:46:32.469376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.469392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.469611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.469626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.469901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.469916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.470229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.470245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.470545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.470560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 
00:31:40.972 [2024-10-01 08:46:32.470895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.470910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.471221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.471237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.471562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.471578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.471875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.471889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.472962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.473005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.473331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.473348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.473586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.473602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.473948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.473964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.474256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.474272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.474608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.474624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 
00:31:40.972 [2024-10-01 08:46:32.474956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.474972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.475306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.475323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.475630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.475645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.475976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.475991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.476334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.476353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.476685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.476700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.477035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.477051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.477381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.477395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.477736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.477752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.478060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.478075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 
00:31:40.972 [2024-10-01 08:46:32.478394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.478409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.478736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.478751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.478970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.478987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.479307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.479322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.479633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.479649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.479973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.479988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.480105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.480122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.480431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.480446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.480633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.480649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 00:31:40.972 [2024-10-01 08:46:32.480960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.972 [2024-10-01 08:46:32.480979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.972 qpair failed and we were unable to recover it. 
00:31:40.976 [2024-10-01 08:46:32.546840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.546855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.547149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.547164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.547474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.547489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.547813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.547828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.548144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.548160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.548555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.548570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.548905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.548919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.549236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.549252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.549465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.549479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.549863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.549877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 
00:31:40.976 [2024-10-01 08:46:32.550096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.550112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.550452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.550467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.550759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.550773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.551182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.551198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.551492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.551508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.551823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.551838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.552144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.552159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.552506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.552522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.552814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.552828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.553146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.553162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 
00:31:40.976 [2024-10-01 08:46:32.553383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.553397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.553716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.553730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.554021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.554036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.554376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.554391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.554685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.554707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.555043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.555059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.555391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.555413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.976 qpair failed and we were unable to recover it. 00:31:40.976 [2024-10-01 08:46:32.555735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.976 [2024-10-01 08:46:32.555749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.555985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.556006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.556323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.556338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 
00:31:40.977 [2024-10-01 08:46:32.556650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.556665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.556864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.556878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.557084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.557100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.557423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.557440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.557813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.557828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.558152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.558167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.558538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.558552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.558901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.558915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.559234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.559249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.559574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.559592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 
00:31:40.977 [2024-10-01 08:46:32.559896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.559910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.560243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.560259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.560574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.560588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.560889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.560903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.561228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.561243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.561634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.561649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.561986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.562006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.562321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.562337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.562655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.562669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.562973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.562988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 
00:31:40.977 [2024-10-01 08:46:32.563292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.563307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.563614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.563628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.563935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.563949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.564286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.564302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.564636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.564650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.564972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.565001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.565331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.565345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.565673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.565688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.566021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.566037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.566233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.566248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 
00:31:40.977 [2024-10-01 08:46:32.566621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.566636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.566938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.566953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.567290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.567306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.567676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.567691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.568036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.568052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 00:31:40.977 [2024-10-01 08:46:32.568417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.977 [2024-10-01 08:46:32.568431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.977 qpair failed and we were unable to recover it. 
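For context on the repeated failure above: on Linux, errno 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) while the host kept retrying the qpair connect. A minimal standalone sketch of the same failure mode, using plain POSIX sockets rather than SPDK's posix.c (this is an illustration, not SPDK code):

/* With no listener on 10.0.0.2:4420, connect() returns -1 and
 * errno is set to 111 (ECONNREFUSED) on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
        struct sockaddr_in sa = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
                /* e.g. "connect() failed, errno = 111 (Connection refused)" */
                printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
}

The sub-millisecond spacing of the log entries is consistent with the host retrying the connection in a tight loop while the target side is down.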
00:31:40.977 [2024-10-01 08:46:32.568536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd7ed0 is same with the state(6) to be set
[... 32 outstanding commands complete with "Read/Write completed with error (sct=0, sc=8)", each followed by "starting I/O failed" ...]
00:31:40.977 [2024-10-01 08:46:32.569462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... 32 more outstanding commands complete with "Read/Write completed with error (sct=0, sc=8)", each followed by "starting I/O failed" ...]
00:31:40.978 [2024-10-01 08:46:32.569799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
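Two notes on decoding the burst above. Per the NVMe base specification, a completion with sct=0 (generic command status) and sc=8 is "Command Aborted due to SQ Deletion", which is how the outstanding I/Os on each qpair are failed once the TCP transport drops; and the "CQ transport error -6" is a negated Linux errno, -ENXIO ("No such device or address"), as the log's own text confirms. A hedged sketch of a host-side completion callback that would report these fields (types and helpers from SPDK's public spdk/nvme.h; the function name io_complete is illustrative, not taken from the log):

/* spdk_nvme_cmd_cb-shaped callback decoding the status fields seen above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
        if (spdk_nvme_cpl_is_error(cpl)) {
                /* For the failures above this prints: I/O failed: sct=0, sc=8 */
                fprintf(stderr, "I/O failed: sct=%d, sc=%d\n",
                        cpl->status.sct, cpl->status.sc);
        }
}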
00:31:40.978 [2024-10-01 08:46:32.570213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.978 [2024-10-01 08:46:32.570262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:40.978 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0xde1180 repeats 39 more times between 08:46:32.570448 and 08:46:32.581507 ...]
00:31:40.978 [2024-10-01 08:46:32.581896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.978 [2024-10-01 08:46:32.581917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.978 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7fbda0000b90 repeats 42 more times between 08:46:32.582345 and 08:46:32.596280 ...]
00:31:40.979 [2024-10-01 08:46:32.596607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.596622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.596966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.596981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.597317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.597333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.597661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.597676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.598007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.598023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.598334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.598349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.598669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.598684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.598978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.599010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.599311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.599325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.599712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.599727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 
00:31:40.979 [2024-10-01 08:46:32.599943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.599957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.600303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.600319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.600643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.979 [2024-10-01 08:46:32.600657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.979 qpair failed and we were unable to recover it. 00:31:40.979 [2024-10-01 08:46:32.600888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.600903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.601216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.601230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.601554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.601569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.601886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.601900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.602219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.602234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.602540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.602554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.602858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.602873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 
00:31:40.980 [2024-10-01 08:46:32.603174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.603189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.603525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.603540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.603749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.603763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.604089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.604104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.604399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.604416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.604749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.604763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.605149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.605164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.605503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.605518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.605819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.605835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.606146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.606161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 
00:31:40.980 [2024-10-01 08:46:32.606496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.606510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.606844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.606859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.607198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.607214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.607542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.607556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.607891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.607906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.608210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.608225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.608545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.608559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.608862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.608880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.609210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.609225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.609550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.609572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 
00:31:40.980 [2024-10-01 08:46:32.609878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.609893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.610214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.610229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.610546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.610561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.610895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.610910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.611225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.611241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.611569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.611584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.611923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.611938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.612254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.612269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.612558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.612572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.612910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.612926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 
00:31:40.980 [2024-10-01 08:46:32.613259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.613274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.613608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.613623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.613841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.613856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.614177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.614193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.614525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.614539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.614843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.614857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.615185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.615199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.615598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.615612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.980 qpair failed and we were unable to recover it. 00:31:40.980 [2024-10-01 08:46:32.615915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.980 [2024-10-01 08:46:32.615929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.616119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.616135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 
00:31:40.981 [2024-10-01 08:46:32.616430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.616444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.616743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.616757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.617079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.617094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.617434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.617449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.617756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.617771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.618103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.618118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.618410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.618424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.618742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.618757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.618941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.618957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.619163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.619180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 
00:31:40.981 [2024-10-01 08:46:32.619549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.619564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.619901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.619917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.620257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.620273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.620572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.620587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.620935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.620950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.621276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.621291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.621579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.621594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.621883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.621901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.622213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.622229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.622532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.622547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 
00:31:40.981 [2024-10-01 08:46:32.622923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.622938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.623291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.623308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.623515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.623531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.623911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.623926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.624248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.624264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.624614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.624629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.624941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.624956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.625292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.625308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.625539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.625554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.625764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.625779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 
00:31:40.981 [2024-10-01 08:46:32.626070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.626086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.626439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.626455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.626762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.626777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.627093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.627108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.627451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.627466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.627840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.627856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.628069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.628085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.628404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.628418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.628759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.628774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.629061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.629076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 
00:31:40.981 [2024-10-01 08:46:32.629426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.629442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.629731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.629745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.630056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.630072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.630415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.630430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.630750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.981 [2024-10-01 08:46:32.630765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.981 qpair failed and we were unable to recover it. 00:31:40.981 [2024-10-01 08:46:32.631080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.631095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.631397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.631412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.631597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.631612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.631957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.631971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.632293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.632310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 
00:31:40.982 [2024-10-01 08:46:32.632626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.632641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.632998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.633014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.633325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.633340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.633676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.633692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.634019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.634035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.634410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.634424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.634806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.634821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.635142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.635160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.635501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.635517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.635856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.635871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 
00:31:40.982 [2024-10-01 08:46:32.636167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.636181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.636486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.636501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.636838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.636854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.637062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.637077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.637385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.637399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.637738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.637753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.638068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.638086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.638394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.638409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.638733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.638748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.639064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.639079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 
00:31:40.982 [2024-10-01 08:46:32.639425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.639439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.639814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.639828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.640166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.640180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.640404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.640419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.640752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.640766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.641073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.641096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.641435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.641449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.641818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.641832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.642164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.642179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.642522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.642537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 
00:31:40.982 [2024-10-01 08:46:32.642874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.642888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.643221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.643236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.643450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.643464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.643780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.643795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.644118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.644133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.644483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.644498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.644803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.644817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.645112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.645127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.645452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.645467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.645843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.645857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 
00:31:40.982 [2024-10-01 08:46:32.646156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.646170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.982 qpair failed and we were unable to recover it. 00:31:40.982 [2024-10-01 08:46:32.646464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.982 [2024-10-01 08:46:32.646479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.646856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.646872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.647211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.647225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.647571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.647585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.647964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.647978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.648181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.648197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.648524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.648542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.648867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.648890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.649211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.649226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 
00:31:40.983 [2024-10-01 08:46:32.649565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.649581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.649925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.649939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.650240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.650255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.650499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.650513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.650880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.650894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.651231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.651246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.651587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.651601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.651898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.651912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.652224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.652238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.652569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.652584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 
00:31:40.983 [2024-10-01 08:46:32.652889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.652904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.653231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.653246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.653587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.653602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.653942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.653957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.654272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.654288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.654613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.654628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.654973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.654988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.655293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.655308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.655612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.655626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.655814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.655830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 
00:31:40.983 [2024-10-01 08:46:32.656138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.656153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.656502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.656517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.656708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.656723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.657021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.657037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.657242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.657259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.657578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.657592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.657927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.657941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.658253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.658269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.658604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.658619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.658946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.658960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 
00:31:40.983 [2024-10-01 08:46:32.659259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.983 [2024-10-01 08:46:32.659274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.983 qpair failed and we were unable to recover it. 00:31:40.983 [2024-10-01 08:46:32.659465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.659480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.659780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.659795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.660230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.660245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.660470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.660485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.660807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.660822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.661122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.661137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.661333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.661354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.661682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.661698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.662046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.662061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 
00:31:40.984 [2024-10-01 08:46:32.662366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.662382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.662704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.662718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.663052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.663067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.663420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.663435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.663747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.663761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.664015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.664032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.664247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.664263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.664591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.664607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.664798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.664814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.665118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.665135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 
00:31:40.984 [2024-10-01 08:46:32.665457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.665472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.665804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.665819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.666169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.666185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.666512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.666526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.666751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.666767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.667091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.667107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.667285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.667301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.667603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.667618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.667947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.667961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.668282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.668299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 
00:31:40.984 [2024-10-01 08:46:32.668633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.668648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.669031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.669047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.669396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.669411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.669699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.669714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.670051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.670067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.670457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.670471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.670786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.670801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.671125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.671139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.671481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.671496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.671844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.671858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 
00:31:40.984 [2024-10-01 08:46:32.672220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.672236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.672576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.672592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.672936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.672950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.673238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.673253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.673578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.673592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.673925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.673946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.674313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.984 [2024-10-01 08:46:32.674329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.984 qpair failed and we were unable to recover it. 00:31:40.984 [2024-10-01 08:46:32.674655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.674673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.675016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.675032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.675379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.675394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 
00:31:40.985 [2024-10-01 08:46:32.675717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.675731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.676068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.676084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.676395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.676409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.676749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.676764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.677194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.677209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.677529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.677544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.677742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.677759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.678094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.678109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.678426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.678441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.678808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.678823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 
00:31:40.985 [2024-10-01 08:46:32.679132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.679148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.679507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.679522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.679852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.679866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.680214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.680229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.680575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.680590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.680963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.680978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.681283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.681298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.681514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.681528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.681846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.681862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.682134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.682151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 
00:31:40.985 [2024-10-01 08:46:32.682491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.682507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.682727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.682742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.682949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.682965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.683294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.683310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.683622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.683637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.683972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.683988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.684309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.684325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.684636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.684653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.684981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.685003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.685328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.685343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 
00:31:40.985 [2024-10-01 08:46:32.685663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.685677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.686017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.686033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.686260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.686275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.686604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.686618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.686989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.687011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.687335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.687350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.687743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.687757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.688097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.688115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.688446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.688461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.688782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.688797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 
00:31:40.985 [2024-10-01 08:46:32.688967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.688983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.689339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.689355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.689662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.689676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.689972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.689988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.690208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.690224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.690524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.985 [2024-10-01 08:46:32.690539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.985 qpair failed and we were unable to recover it. 00:31:40.985 [2024-10-01 08:46:32.690712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.690729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.691049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.691065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.691469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.691484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.691787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.691803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 
00:31:40.986 [2024-10-01 08:46:32.692113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.692130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.692455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.692470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.692681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.692698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.692980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.693003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.693368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.693383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.693761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.693775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.694098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.694113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.694441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.694456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.694745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.694759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.695065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.695080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 
00:31:40.986 [2024-10-01 08:46:32.695476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.695490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.695793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.695807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.696112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.696127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.696352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.696366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.696695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.696710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.696889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.696905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.697247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.697264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.697595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.697609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.697989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.698020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.698350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.698364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 
00:31:40.986 [2024-10-01 08:46:32.698758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.698773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.699068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.699085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.699298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.699313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.699715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.699729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.700028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.700044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.700375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.700390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.700704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.700719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.700942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.700960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.701264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.701288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 00:31:40.986 [2024-10-01 08:46:32.701621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:40.986 [2024-10-01 08:46:32.701636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420 00:31:40.986 qpair failed and we were unable to recover it. 
00:31:40.986 [2024-10-01 08:46:32.701972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.986 [2024-10-01 08:46:32.701988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.986 qpair failed and we were unable to recover it.
00:31:40.986 [2024-10-01 08:46:32.702305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.986 [2024-10-01 08:46:32.702321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.986 qpair failed and we were unable to recover it.
00:31:40.986 [2024-10-01 08:46:32.702673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.986 [2024-10-01 08:46:32.702688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.986 qpair failed and we were unable to recover it.
00:31:40.986 [2024-10-01 08:46:32.703061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.986 [2024-10-01 08:46:32.703076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.986 qpair failed and we were unable to recover it.
00:31:40.986 [2024-10-01 08:46:32.703263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.986 [2024-10-01 08:46:32.703278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.986 qpair failed and we were unable to recover it.
00:31:40.986 [2024-10-01 08:46:32.703627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.986 [2024-10-01 08:46:32.703641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.986 qpair failed and we were unable to recover it.
00:31:40.986 [2024-10-01 08:46:32.703862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.986 [2024-10-01 08:46:32.703877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.986 qpair failed and we were unable to recover it.
00:31:40.986 [2024-10-01 08:46:32.704238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.986 [2024-10-01 08:46:32.704254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.986 qpair failed and we were unable to recover it.
00:31:40.986 [2024-10-01 08:46:32.704549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.704563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.704901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.704916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.705193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.705209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.705508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.705523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.705912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.705927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.706259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.706275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.706490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.706504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.706812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.706826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.707150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.707166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.707513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.707529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.707641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.707657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.708018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.708033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.708343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.708358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.708688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.708704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.709026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.709041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.709371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.709386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.709767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.709805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.710221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.710267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.710636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.710649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.711021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.711033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.711426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.711472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.711773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.711787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.712223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.712269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.712600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.712614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.712795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.712808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.713136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.713150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.713441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.713453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.713798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.713809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.714091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.714103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.714341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.714351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.714676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.714687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.714999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.715010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.715298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.715308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.987 qpair failed and we were unable to recover it.
00:31:40.987 [2024-10-01 08:46:32.715651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.987 [2024-10-01 08:46:32.715663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.715990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.716006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.716290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.716300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.716576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.716586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.716907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.716918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.717243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.717254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.717546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.717556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.717860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.717870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.718190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.718200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.718431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.718441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.718730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.718743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.719060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.719071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.719364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.719374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.719680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.719690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.719997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.720008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.720347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.720356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.720665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.720675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.721023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.721033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.721341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.721352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.721572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.721581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.721900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.721915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.722229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.722239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.722523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.722533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.722709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.722720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.723033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.723043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.723340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.723350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.723675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.723685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.724019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.724030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.724436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.724446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.724763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.724773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.725092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.725102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.725418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.725427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.725743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.725753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.726120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.726131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.726442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.726452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.726729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.726739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.727030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.727041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.727381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.988 [2024-10-01 08:46:32.727391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.988 qpair failed and we were unable to recover it.
00:31:40.988 [2024-10-01 08:46:32.727701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.727711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.728087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.728099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.728405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.728415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.728725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.728734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.729024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.729035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.729377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.729386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.729675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.729685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.730004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.730014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.730294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.730304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.730514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.730523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.730836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.730846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.731227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.731237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.731525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.731535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.731859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.731869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.732174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.732184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.732501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.732511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.732799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.732809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.733116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.733126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.733416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.733426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.733763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.733773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.734059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.734069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.734394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.734405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.734725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.734736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.735052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.735063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.735366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.735375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.735678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.735689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.735981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.735991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.736389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.736399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.736735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.736745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.736926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.736937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.737253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.737264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.737542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.737551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.737955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.737965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.738247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.738257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.738576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.738586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.738896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.738906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.739227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.739238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.739546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.739556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.739895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.989 [2024-10-01 08:46:32.739906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.989 qpair failed and we were unable to recover it.
00:31:40.989 [2024-10-01 08:46:32.740209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.740219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.740452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.740464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.740798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.740808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.741089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.741100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.741419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.741429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.741711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.741721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.742056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.742066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.742445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.742456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.742754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.742765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.743099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.743109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.743416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.743426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.743757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.743767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.744060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.744071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.744274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.744284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.744656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.744666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.744988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.745002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.745377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.745386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.745693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.745702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.746009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.746019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.746333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.746343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.746539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.746548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.746886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.746896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.747274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.747284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.747591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.747601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.747901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.747910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.748228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.748238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.748556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.748565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.748845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.748855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.749171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.749181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.749469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.749479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.749807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.749817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.750175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.750185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.750484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.750494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.750792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.750802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.751112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.751122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:40.990 [2024-10-01 08:46:32.751407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:40.990 [2024-10-01 08:46:32.751417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:40.990 qpair failed and we were unable to recover it.
00:31:41.265 [2024-10-01 08:46:32.751705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.265 [2024-10-01 08:46:32.751717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.265 qpair failed and we were unable to recover it.
00:31:41.265 [2024-10-01 08:46:32.752010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.265 [2024-10-01 08:46:32.752021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.265 qpair failed and we were unable to recover it.
00:31:41.265 [2024-10-01 08:46:32.752345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.752356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.752649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.752659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.752973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.752982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.753262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.753273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.753592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.753603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.753882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.753893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.754214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.754225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.754513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.754523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.754821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.754830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.755145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.755163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.755455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.755465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.755750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.755759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.756055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.756065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.756358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.266 [2024-10-01 08:46:32.756369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.266 qpair failed and we were unable to recover it.
00:31:41.266 [2024-10-01 08:46:32.756584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.756595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.756897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.756907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.757210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.757220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.757418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.757428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.757722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.757732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.758062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.758074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.758381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.758390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.758559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.758570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.758936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.758946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.759255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.759265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 
00:31:41.266 [2024-10-01 08:46:32.759532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.759541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.759883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.759894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.266 qpair failed and we were unable to recover it. 00:31:41.266 [2024-10-01 08:46:32.760212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.266 [2024-10-01 08:46:32.760222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.760502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.760512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.760778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.760789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.761086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.761097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.761294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.761304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.761613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.761624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.761946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.761956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.762322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.762332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 
00:31:41.267 [2024-10-01 08:46:32.762642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.762651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.762953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.762963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.763267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.763277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.763603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.763620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.763953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.763963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.764304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.764316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.764617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.764626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.764906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.764915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.765111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.765122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.765404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.765413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 
00:31:41.267 [2024-10-01 08:46:32.765784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.765794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.766094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.766105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.766420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.766430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.766707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.766717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.767040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.767050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.767338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.767347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.767642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.767651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.767926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.767936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.768253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.768264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 00:31:41.267 [2024-10-01 08:46:32.768462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.267 [2024-10-01 08:46:32.768472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.267 qpair failed and we were unable to recover it. 
00:31:41.268 [2024-10-01 08:46:32.768798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.768808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.769117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.769128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.769436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.769447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.769754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.769765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.770094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.770105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.770391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.770401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.770714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.770724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.770914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.770924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.771104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.771116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.771438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.771448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 
00:31:41.268 [2024-10-01 08:46:32.771769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.771779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.772068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.772078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.772273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.772283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.772575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.772585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.772888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.772898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.773199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.773209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.773488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.773498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.773832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.773842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.774121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.774134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.774447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.774456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 
00:31:41.268 [2024-10-01 08:46:32.774755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.774764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.775100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.775110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.775445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.775455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.775739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.775748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.776068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.776080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.776387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.776396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.776704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.268 [2024-10-01 08:46:32.776714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.268 qpair failed and we were unable to recover it. 00:31:41.268 [2024-10-01 08:46:32.776999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.777010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.777288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.777299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.777574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.777585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 
00:31:41.269 [2024-10-01 08:46:32.777926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.777937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.778257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.778269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.778465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.778476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.778793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.778803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.779169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.779179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.779470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.779480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.779758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.779768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.780067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.780077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.780272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.780283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.780585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.780595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 
00:31:41.269 [2024-10-01 08:46:32.780894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.780903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.781220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.781230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.781525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.781536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.781722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.781732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.781897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.781908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.782209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.782223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.782525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.782536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.782841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.782851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.783156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.783166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.783461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.783471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 
00:31:41.269 [2024-10-01 08:46:32.783786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.783796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.784089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.784099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.784415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.784425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.784708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.784718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.269 qpair failed and we were unable to recover it. 00:31:41.269 [2024-10-01 08:46:32.784986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.269 [2024-10-01 08:46:32.784999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.785213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.785223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.785534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.785543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.785848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.785858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.786198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.786208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.786504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.786514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 
00:31:41.270 [2024-10-01 08:46:32.786828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.786839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.787151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.787162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.787463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.787472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.787862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.787871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.788185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.788196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.788504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.788513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.788819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.788829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.789147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.789157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.789449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.789468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.789793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.789802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 
00:31:41.270 [2024-10-01 08:46:32.790109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.790120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.790442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.790452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.790758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.790768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.791067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.791077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.791415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.791425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.791745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.791755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.792039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.792049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.792369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.792379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.792570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.792580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.792904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.792914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 
00:31:41.270 [2024-10-01 08:46:32.793228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.793238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.270 [2024-10-01 08:46:32.793552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.270 [2024-10-01 08:46:32.793561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.270 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.793873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.793884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.794221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.794231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.794509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.794519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.794813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.794823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.795137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.795150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.795479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.795490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.795708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.795719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.796046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.796056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 
00:31:41.271 [2024-10-01 08:46:32.796367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.796376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.796670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.796680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.797008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.797018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.797363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.797373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.797678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.797688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.797974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.797984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.798298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.798309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.798638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.798648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.798934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.798944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.799247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.799257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 
00:31:41.271 [2024-10-01 08:46:32.799570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.799580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.799870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.799879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.800158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.800168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.800368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.800378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.800669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.800679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.800987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.801007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.801206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.801216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.801516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.801525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.801852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.271 [2024-10-01 08:46:32.801861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.271 qpair failed and we were unable to recover it. 00:31:41.271 [2024-10-01 08:46:32.802152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.272 [2024-10-01 08:46:32.802162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.272 qpair failed and we were unable to recover it. 
00:31:41.272 [2024-10-01 08:46:32.802445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.272 [2024-10-01 08:46:32.802454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.272 qpair failed and we were unable to recover it.
[The three-line error above repeats verbatim, timestamps aside, for roughly 210 consecutive connection attempts between 08:46:32.802445 and 08:46:32.867351 (Jenkins timestamps 00:31:41.272 through 00:31:41.280); every attempt targets the same tqpair=0xde1180 at addr=10.0.0.2, port=4420 and fails with errno = 111.]
00:31:41.280 [2024-10-01 08:46:32.867636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.867647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.867951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.867962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.868261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.868272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.868459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.868470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.868791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.868801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.869080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.869090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.869402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.869412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.869693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.869703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.870023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.870033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.870331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.870341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 
00:31:41.280 [2024-10-01 08:46:32.870629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.870639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.870827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.870840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.871069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.871079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.871405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.871415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.871699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.871708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.871991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.872006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.872321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.872331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.872668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.872678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.872951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.872960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.873309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.873320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 
00:31:41.280 [2024-10-01 08:46:32.873626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.873636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.873910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.873920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.874192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.874201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.874549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.874559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.874890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.874899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.875210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.280 [2024-10-01 08:46:32.875220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.280 qpair failed and we were unable to recover it. 00:31:41.280 [2024-10-01 08:46:32.875520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.875529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.875813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.875823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.876133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.876143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.876481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.876491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 
00:31:41.281 [2024-10-01 08:46:32.876771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.876780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.877093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.877104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.877431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.877440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.877727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.877737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.878028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.878038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.878337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.878347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.878625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.878636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.878952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.878963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.879291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.879306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.879620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.879631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 
00:31:41.281 [2024-10-01 08:46:32.879927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.879938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.880249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.880259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.880540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.880550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.880859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.880869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.881199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.881210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.881507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.881516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.881818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.881828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.882137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.882147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.882451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.882460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.882766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.882776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 
00:31:41.281 [2024-10-01 08:46:32.882991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.883004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.883316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.883326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.883635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.883645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.883938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.281 [2024-10-01 08:46:32.883947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.281 qpair failed and we were unable to recover it. 00:31:41.281 [2024-10-01 08:46:32.884245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.884255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.884461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.884471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.884673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.884683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.884985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.884997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.885319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.885328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.885613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.885632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 
00:31:41.282 [2024-10-01 08:46:32.885963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.885973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.886308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.886319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.886618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.886628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.886835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.886844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.887162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.887172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.887455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.887464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.887756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.887766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.888075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.888085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.888378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.888388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.888666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.888676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 
00:31:41.282 [2024-10-01 08:46:32.888983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.888992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.889277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.889287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.889634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.889644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.889831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.889842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.890144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.890154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.890433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.890443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.890757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.890767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.891041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.891051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.891374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.891384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.891694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.891705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 
00:31:41.282 [2024-10-01 08:46:32.891988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.282 [2024-10-01 08:46:32.892001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.282 qpair failed and we were unable to recover it. 00:31:41.282 [2024-10-01 08:46:32.892315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.892325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.892598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.892608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.892870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.892879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.893251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.893261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.893561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.893571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.893855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.893865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.894060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.894080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.894371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.894381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.894682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.894691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 
00:31:41.283 [2024-10-01 08:46:32.894977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.894987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.895316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.895327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.895629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.895638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.895975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.895985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.896306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.896316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.896624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.896633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.896941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.896950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.897257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.897267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.897577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.897587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.897904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.897913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 
00:31:41.283 [2024-10-01 08:46:32.898296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.898306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.898588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.898598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.898887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.898897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.899203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.899213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.899495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.899504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.899807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.899817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.900090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.900103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.900446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.900456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.900746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.283 [2024-10-01 08:46:32.900763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-10-01 08:46:32.901068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.901078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 
00:31:41.284 [2024-10-01 08:46:32.901393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.901408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.901736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.901746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.902052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.902063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.902267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.902277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.902604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.902613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.902920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.902929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.903217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.903228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.903506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.903516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.903835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.903845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.904147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.904157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 
00:31:41.284 [2024-10-01 08:46:32.904457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.904468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.904637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.904646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.904919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.904929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.905240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.905250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.905550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.905560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.905834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.905844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.906166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.906176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.906480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.906490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.906674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.906685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-10-01 08:46:32.906881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.284 [2024-10-01 08:46:32.906890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.284 qpair failed and we were unable to recover it. 
00:31:41.284 [2024-10-01 08:46:32.907174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.284 [2024-10-01 08:46:32.907185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.284 qpair failed and we were unable to recover it.
00:31:41.292 [2024-10-01 08:46:32.969519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.292 [2024-10-01 08:46:32.969528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.292 qpair failed and we were unable to recover it.
00:31:41.292 [2024-10-01 08:46:32.969833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.292 [2024-10-01 08:46:32.969843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.292 qpair failed and we were unable to recover it. 00:31:41.292 [2024-10-01 08:46:32.970149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.292 [2024-10-01 08:46:32.970159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.292 qpair failed and we were unable to recover it. 00:31:41.292 [2024-10-01 08:46:32.970444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.292 [2024-10-01 08:46:32.970453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.292 qpair failed and we were unable to recover it. 00:31:41.292 [2024-10-01 08:46:32.970773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.292 [2024-10-01 08:46:32.970783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.292 qpair failed and we were unable to recover it. 00:31:41.292 [2024-10-01 08:46:32.971093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.292 [2024-10-01 08:46:32.971104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.292 qpair failed and we were unable to recover it. 00:31:41.292 [2024-10-01 08:46:32.971414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.292 [2024-10-01 08:46:32.971430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.292 qpair failed and we were unable to recover it. 00:31:41.292 [2024-10-01 08:46:32.971686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.292 [2024-10-01 08:46:32.971696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.292 qpair failed and we were unable to recover it. 00:31:41.292 [2024-10-01 08:46:32.972002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.292 [2024-10-01 08:46:32.972012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.292 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.972328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.972338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.972647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.972656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 
00:31:41.293 [2024-10-01 08:46:32.972986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.972999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.973245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.973257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.973568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.973578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.973870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.973879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.974163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.974174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.974488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.974498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.974805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.974815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.975195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.975207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.975481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.975491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.975805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.975815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 
00:31:41.293 [2024-10-01 08:46:32.976102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.976112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.976387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.976397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.976714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.976724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.977004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.977014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.977294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.977304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.977609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.977620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.977938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.977947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.978142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.978153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.978469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.978479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.978796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.978806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 
00:31:41.293 [2024-10-01 08:46:32.979121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.979131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.979450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.979460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.979785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.979795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.980060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.980070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.980440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.980451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.293 qpair failed and we were unable to recover it. 00:31:41.293 [2024-10-01 08:46:32.980813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.293 [2024-10-01 08:46:32.980823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.981104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.981115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.981439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.981450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.981789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.981800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.982030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.982040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 
00:31:41.294 [2024-10-01 08:46:32.982361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.982372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.982698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.982711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.982988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.983002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.983305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.983314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.983505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.983515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.983826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.983836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.984192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.984202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.984478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.984488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.984803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.984812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.985105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.985115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 
00:31:41.294 [2024-10-01 08:46:32.985430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.985440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.985750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.985761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.986061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.986074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.986368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.986377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.986678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.986688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.986972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.986981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.987317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.987327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.987634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.987644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.987974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.987984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.988272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.988283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 
00:31:41.294 [2024-10-01 08:46:32.988595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.988606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.988888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.988898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.294 [2024-10-01 08:46:32.989205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.294 [2024-10-01 08:46:32.989216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.294 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.989407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.989419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.989733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.989743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.990021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.990031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.990309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.990320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.990641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.990651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.990980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.990990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.991261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.991271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 
00:31:41.295 [2024-10-01 08:46:32.991587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.991596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.991882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.991892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.992223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.992234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.992509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.992518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.992777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.992787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.993093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.993103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.993439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.993449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.993735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.993745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.994052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.994062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.994367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.994381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 
00:31:41.295 [2024-10-01 08:46:32.994712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.994723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.994882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.994893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.995236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.995246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.995537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.995547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.995812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.995821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.996120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.996130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.996383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.996393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.996632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.996641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.996903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.996912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 00:31:41.295 [2024-10-01 08:46:32.997338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.997348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.295 qpair failed and we were unable to recover it. 
00:31:41.295 [2024-10-01 08:46:32.997532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.295 [2024-10-01 08:46:32.997543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:32.997852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:32.997862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:32.998128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:32.998138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:32.998439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:32.998449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:32.998767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:32.998777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:32.999061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:32.999071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:32.999368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:32.999378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:32.999689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:32.999700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:32.999989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.000003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.000341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.000352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 
00:31:41.296 [2024-10-01 08:46:33.000655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.000665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.000949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.000958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.001233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.001243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.001567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.001577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.001904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.001913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.002206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.002216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.002518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.002528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.002815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.002825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.003143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.003153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.003429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.003439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 
00:31:41.296 [2024-10-01 08:46:33.003755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.003764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.004046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.004056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.004380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.004390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.004714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.004724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.005050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.005060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.005416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.005425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.005756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.005766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.006051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.296 [2024-10-01 08:46:33.006062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.296 qpair failed and we were unable to recover it. 00:31:41.296 [2024-10-01 08:46:33.006364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.006374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.006654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.006663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 
00:31:41.297 [2024-10-01 08:46:33.006968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.006980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.007277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.007287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.007578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.007593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.007920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.007929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.008117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.008126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.008449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.008458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.008788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.008797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.009104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.009114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.009417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.009426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 00:31:41.297 [2024-10-01 08:46:33.009727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.297 [2024-10-01 08:46:33.009737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.297 qpair failed and we were unable to recover it. 
00:31:41.297 [2024-10-01 08:46:33.009952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.297 [2024-10-01 08:46:33.009962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.297 qpair failed and we were unable to recover it.
00:31:41.297 [... the three messages above repeat with only the timestamps advancing, ~210 occurrences spanning 08:46:33.009952 through 08:46:33.072787: every connect() to 10.0.0.2:4420 is refused (errno = 111), every tqpair=0xde1180 socket connection errors out, and every qpair fails without recovery ...]
00:31:41.305 [2024-10-01 08:46:33.072778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.305 [2024-10-01 08:46:33.072787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.305 qpair failed and we were unable to recover it.
00:31:41.305 [2024-10-01 08:46:33.073092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.305 [2024-10-01 08:46:33.073102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.305 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.073487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.073499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.073691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.073702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.073981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.073990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.074289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.074308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.074621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.074630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.074917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.074927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.075210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.075220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.075504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.075514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.075814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.075823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 
00:31:41.578 [2024-10-01 08:46:33.076125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.076134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.076451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.076461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.076697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.076707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.077042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.077052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.077257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.077267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.077585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.077595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.077929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.077938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.078120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.078130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.078425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.078434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.078720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.078729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 
00:31:41.578 [2024-10-01 08:46:33.079033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.079043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.079329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.079339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.079646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.079656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.079848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.079857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.080136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.080146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.080459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.080470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.080770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.080780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.578 [2024-10-01 08:46:33.081090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.578 [2024-10-01 08:46:33.081099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.578 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.081381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.081391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.081721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.081730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 
00:31:41.579 [2024-10-01 08:46:33.082014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.082023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.082320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.082330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.082519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.082529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.082846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.082855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.083141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.083151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.083455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.083465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.083770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.083780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.083970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.083979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.084270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.084280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.084587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.084597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 
00:31:41.579 [2024-10-01 08:46:33.084928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.084938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.085263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.085274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.085577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.085587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.085833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.085842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.086135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.086145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.086460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.086470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.086797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.086807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.087110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.087120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.087442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.087451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.087756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.087766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 
00:31:41.579 [2024-10-01 08:46:33.088056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.088066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.088305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.088315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.088600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.088612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.088919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.088929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.089239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.089250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.089552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.089562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.089894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.089904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.090235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.090246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.090573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.090582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.090857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.090867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 
00:31:41.579 [2024-10-01 08:46:33.091154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.091164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.091503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.091514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.091818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.091827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.092149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.092160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.092448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.092458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.092759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.092769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.579 [2024-10-01 08:46:33.093152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.579 [2024-10-01 08:46:33.093162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.579 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.093459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.093469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.093788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.093798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.094092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.094102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 
00:31:41.580 [2024-10-01 08:46:33.094388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.094398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.094666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.094675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.094960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.094969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.095273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.095283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.095560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.095570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.095882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.095891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.096078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.096088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.096446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.096456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.096744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.096754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.097081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.097091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 
00:31:41.580 [2024-10-01 08:46:33.097418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.097428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.097731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.097740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.098035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.098045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.098369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.098378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.098664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.098682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.099007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.099017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.099325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.099334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.099640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.099649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.099948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.099958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.100229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.100239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 
00:31:41.580 [2024-10-01 08:46:33.100563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.100573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.100850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.100860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.101151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.101161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.101469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.101481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.101756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.101766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.102058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.102075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.102391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.102401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.102685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.102694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.103005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.103015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.103381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.103391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 
00:31:41.580 [2024-10-01 08:46:33.103668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.103678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.103951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.103961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.104296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.104306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.104673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.104683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.104976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.104986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.580 [2024-10-01 08:46:33.105365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.580 [2024-10-01 08:46:33.105375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.580 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.105661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.105678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.105988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.106000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.106389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.106398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.106695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.106704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 
00:31:41.581 [2024-10-01 08:46:33.106888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.106899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.107208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.107218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.107523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.107532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.107856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.107865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.108157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.108167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.108523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.108532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.108878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.108887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.109210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.109220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.109507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.109516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.109818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.109827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 
00:31:41.581 [2024-10-01 08:46:33.110134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.110146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.110466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.110481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.110807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.110818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.111037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.111047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.111389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.111399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.111653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.111663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.111996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.112006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.112371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.112381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.112573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.112582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.112809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.112819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 
00:31:41.581 [2024-10-01 08:46:33.113104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.113115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.113424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.113434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.113742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.113752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.114030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.114040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.114366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.114375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.114656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.114665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.114947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.114957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.115220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.115230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.115526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.115535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 00:31:41.581 [2024-10-01 08:46:33.115815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.581 [2024-10-01 08:46:33.115825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.581 qpair failed and we were unable to recover it. 
00:31:41.586 [2024-10-01 08:46:33.173914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.586 [2024-10-01 08:46:33.173925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.586 qpair failed and we were unable to recover it. 00:31:41.586 [2024-10-01 08:46:33.174257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.586 [2024-10-01 08:46:33.174268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.586 qpair failed and we were unable to recover it. 00:31:41.586 [2024-10-01 08:46:33.174543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.586 [2024-10-01 08:46:33.174554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.586 qpair failed and we were unable to recover it. 00:31:41.586 [2024-10-01 08:46:33.174742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.586 [2024-10-01 08:46:33.174753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.586 qpair failed and we were unable to recover it. 00:31:41.586 [2024-10-01 08:46:33.174942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.586 [2024-10-01 08:46:33.174952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.175218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.175229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.175446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.175457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.175749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.175759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.176051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.176063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.176359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.176370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 
00:31:41.587 [2024-10-01 08:46:33.176665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.176676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.176968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.176979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.177309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.177322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.177672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.177682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.177985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.178005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.178368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.178379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.178718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.178729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.179053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.179064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.179295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.179306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.179597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.179606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 
00:31:41.587 [2024-10-01 08:46:33.179946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.179956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.180256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.180273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.180619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.180629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.180934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.180943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.181233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.181244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.181436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.181446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.181672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.181682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.181985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.182003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.182348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.182359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.182619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.182629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 
00:31:41.587 [2024-10-01 08:46:33.182957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.182969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.183295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.183305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.183610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.183620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.183924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.183934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.184235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.184252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.184469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.184480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.184805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.184816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.185140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.185150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.185433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.587 [2024-10-01 08:46:33.185443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.587 qpair failed and we were unable to recover it. 00:31:41.587 [2024-10-01 08:46:33.185742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.185752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 
00:31:41.588 [2024-10-01 08:46:33.186034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.186044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.186374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.186384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.186584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.186594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.186899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.186911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.187104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.187116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.187483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.187494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.187794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.187805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.188126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.188136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.188440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.188450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.188765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.188775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 
00:31:41.588 [2024-10-01 08:46:33.188976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.188986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.189264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.189274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.189457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.189468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.189839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.189850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.190164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.190174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.190488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.190497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.190800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.190810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.191023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.191034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.191335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.191345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.191626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.191635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 
00:31:41.588 [2024-10-01 08:46:33.191898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.191908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.192219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.192230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.192542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.192552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.192838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.192848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.193149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.193160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.193475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.193485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.193761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.193770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.194136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.194146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.194412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.194422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.194741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.194751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 
00:31:41.588 [2024-10-01 08:46:33.195029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.195040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.195356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.195366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.195639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.195649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.195982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.195992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.196156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.196167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.196476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.196486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.196790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.196804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.197125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.197135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.588 qpair failed and we were unable to recover it. 00:31:41.588 [2024-10-01 08:46:33.197441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.588 [2024-10-01 08:46:33.197451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.197753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.197763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 
00:31:41.589 [2024-10-01 08:46:33.198060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.198070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.198387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.198397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.198673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.198684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.199015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.199026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.199313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.199323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.199654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.199664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.199894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.199904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.200233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.200243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.200650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.200661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.200941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.200952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 
00:31:41.589 [2024-10-01 08:46:33.201253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.201264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.201550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.201559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.201865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.201875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.202159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.202169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.202372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.202382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.202700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.202709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.203005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.203016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.203206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.203216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.203488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.203498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.203783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.203793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 
00:31:41.589 [2024-10-01 08:46:33.204072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.204082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.204374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.204384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.204692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.204701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.204989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.205005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.205288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.205298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.205509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.205519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.205731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.205741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.206003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.206014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.206322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.206332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.206608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.206618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 
00:31:41.589 [2024-10-01 08:46:33.206941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.206951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.207332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.207343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.207638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.207647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.207949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.207959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.208259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.208269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.208567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.208577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.208766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.208776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.589 [2024-10-01 08:46:33.209068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.589 [2024-10-01 08:46:33.209078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.589 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.209398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.209414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.209734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.209744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 
00:31:41.590 [2024-10-01 08:46:33.210018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.210028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.210315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.210325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.210517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.210526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.210790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.210799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.211094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.211104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.211366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.211376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.211673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.211683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.211975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.211985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.212280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.212289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.212595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.212604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 
00:31:41.590 [2024-10-01 08:46:33.212931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.212941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.213227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.213237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.213563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.213573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.213864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.213874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.214158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.214169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.214484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.214493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.214771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.214780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.215093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.215103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.215383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.215393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 00:31:41.590 [2024-10-01 08:46:33.215700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.590 [2024-10-01 08:46:33.215709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.590 qpair failed and we were unable to recover it. 
00:31:41.590 [2024-10-01 08:46:33.215984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.215996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.216301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.216311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.216590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.216600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.216865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.216874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.217213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.217225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.217552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.217562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.217837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.217846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.218193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.218202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.218501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.218511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.218821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.218830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.219105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.219115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.219320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.219330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.219587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.219596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.219915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.219924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.220226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.220235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.220552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.220562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.590 [2024-10-01 08:46:33.220894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.590 [2024-10-01 08:46:33.220904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.590 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.221211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.221221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.221493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.221503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.221837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.221847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.222146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.222156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.222449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.222458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.222797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.222807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.223137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.223148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.223481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.223492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.223792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.223802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.224101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.224111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.224415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.224425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.224707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.224717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.225035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.225045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.225335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.225345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.225648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.225660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.225958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.225968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.226238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.226247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.226554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.226564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.226776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.226786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.227047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.227057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.227357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.227368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.227646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.227656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.227977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.227987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.228361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.228372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.228648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.228658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.228928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.228938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.229254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.229264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.229590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.229599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.229907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.229916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.230234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.230244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.230438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.230447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.230730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.230740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.231044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.231054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.231276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.231285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.591 [2024-10-01 08:46:33.231604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.591 [2024-10-01 08:46:33.231622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.591 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.231873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.231883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.232178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.232189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.232497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.232507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.232787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.232796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.233082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.233092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.233396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.233405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.233710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.233719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.234015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.234025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.234298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.234307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.234610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.234620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.234810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.234819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.235107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.235117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.235423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.235433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.235711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.235721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.236029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.236039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.236331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.236340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.236626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.236635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.236924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.236933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.237242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.237252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.237426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.237436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.237767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.237779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.238139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.238150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.238446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.238456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.238641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.238651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.238843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.238854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.239150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.239160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.239372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.239382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.239651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.239661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.239957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.239968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.240162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.240171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.240434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.240443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.240623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.240634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.240894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.240904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.241213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.241224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.241527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.241537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.241845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.241855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.242225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.242235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.242491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.242502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.592 [2024-10-01 08:46:33.242808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.592 [2024-10-01 08:46:33.242818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.592 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.243096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.243106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.243418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.243428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.243736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.243746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.244051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.244061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.244372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.244382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.244643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.244653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.244971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.244981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.245277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.245287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.245591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.245601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.245927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.245937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.246315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.246325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.246583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.246592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.246921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.246930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.247232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.247242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.247547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.247556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.247864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.247874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.248165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.248175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.248362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.248373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.248692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.248702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.249041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.249051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.249408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.249418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.249788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.249798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.250127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.250137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.250405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.250415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.250797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.250808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.251124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.251134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.251434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.251444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.251771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.251780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.252067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.252077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.252403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.252412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.252717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.252727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.253029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.253039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.253197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.253207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.253478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.253487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.253791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.253801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.254151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.254161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.254448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.254458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.254785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.254794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.255102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.593 [2024-10-01 08:46:33.255112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.593 qpair failed and we were unable to recover it.
00:31:41.593 [2024-10-01 08:46:33.255434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.255443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.255734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.255743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.256061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.256072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.256417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.256427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.256720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.256730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.257057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.257067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.257394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.257403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.257681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.257691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.257958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.257967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.258263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.258273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.258562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.258574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.258832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.258841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.259144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.259154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.259493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.259503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.259827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.259837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.260116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.260126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.260452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.260461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.260659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.260669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.261004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.261014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.261307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.261318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.261621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.261632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.261807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.261818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.262129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.262139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.262349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.262359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.262682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.262692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.262982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.262992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.263254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.263263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.263650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.263660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.263987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.264000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.264168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.264180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.264494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.264503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.264792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.264803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.265030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.265040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.265202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.265212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.265490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.265500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.265805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.265815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.266105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.266115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.266435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.266445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.266755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.594 [2024-10-01 08:46:33.266765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.594 qpair failed and we were unable to recover it.
00:31:41.594 [2024-10-01 08:46:33.266982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.266992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.267294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.267305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.267613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.267622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.267903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.267912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.268240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.268250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.268551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.268560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.268866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.268875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.269154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.269164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.269483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.269493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.269798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.269808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.270116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.270125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.270409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.270418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.270728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.270738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.271019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.271029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.271320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.271330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.271636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.271645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.271929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.271939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.272262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.272272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.272660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.272670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.273002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.273012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.273353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.273363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.273643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.273652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.273964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.273973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.274251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.274261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.274573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.274583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.274871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.274880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.275069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.275080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.275468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.275477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.275769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.275779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.276043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.276053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.276255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.276264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.276470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.276479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.276677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.276686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.276989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.277002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.277311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.277321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.277627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.277636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.277940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.277950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.278162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.278173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.278504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.595 [2024-10-01 08:46:33.278514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.595 qpair failed and we were unable to recover it.
00:31:41.595 [2024-10-01 08:46:33.278790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.596 [2024-10-01 08:46:33.278802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.596 qpair failed and we were unable to recover it.
00:31:41.596 [2024-10-01 08:46:33.279110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.596 [2024-10-01 08:46:33.279120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.596 qpair failed and we were unable to recover it.
00:31:41.596 [2024-10-01 08:46:33.279398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.279408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.279715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.279724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.280012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.280022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.280202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.280213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.280550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.280560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.280854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.280864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.281100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.281110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.281426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.281435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.281748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.281757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.282068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.282078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 
00:31:41.596 [2024-10-01 08:46:33.282368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.282378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.282654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.282664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.282937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.282947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.283159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.283169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.283486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.283495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.283769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.283779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.284084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.284094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.284382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.284391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.284639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.284649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.284962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.284972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 
00:31:41.596 [2024-10-01 08:46:33.285301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.285310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.285605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.285615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.285924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.285933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.286228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.286239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.286551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.286561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.286768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.286779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.287043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.287054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.287355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.287365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.287667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.287677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.287959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.287968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 
00:31:41.596 [2024-10-01 08:46:33.288273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.288283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.288590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.288599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.288887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.288897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.596 qpair failed and we were unable to recover it. 00:31:41.596 [2024-10-01 08:46:33.289228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.596 [2024-10-01 08:46:33.289238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.289518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.289528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.289810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.289819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.290102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.290113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.290359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.290368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.290697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.290707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.291044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.291056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 
00:31:41.597 [2024-10-01 08:46:33.291414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.291423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.291765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.291774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.292092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.292102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.292431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.292441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.292770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.292781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.292953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.292964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.293281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.293291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.293455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.293466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.293670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.293680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.293986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.294000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 
00:31:41.597 [2024-10-01 08:46:33.294287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.294296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.294595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.294604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.294913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.294922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.295210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.295220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.295528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.295538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.295818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.295827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.296106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.296116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.296420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.296429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.296635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.296645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.296981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.296992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 
00:31:41.597 [2024-10-01 08:46:33.297321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.297332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.297636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.297646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.297922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.297931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.298244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.298254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.298531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.298540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.298823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.298832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.299098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.299110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.299430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.299439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.299719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.299728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.300043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.300053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 
00:31:41.597 [2024-10-01 08:46:33.300352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.300362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.300695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.300704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.597 qpair failed and we were unable to recover it. 00:31:41.597 [2024-10-01 08:46:33.300982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.597 [2024-10-01 08:46:33.300991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.301299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.301309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.301586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.301596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.301862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.301872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.302156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.302166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.302455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.302465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.302767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.302777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.303074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.303084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 
00:31:41.598 [2024-10-01 08:46:33.303389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.303399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.303709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.303719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.303993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.304005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.304319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.304328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.304606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.304616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.304918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.304928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.305226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.305236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.305540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.305550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.305855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.305866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.306144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.306154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 
00:31:41.598 [2024-10-01 08:46:33.306458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.306468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.306816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.306825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.307125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.307136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.307448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.307458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.307760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.307770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.308105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.308116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.308437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.308448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.308747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.308757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.309025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.309035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.309249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.309258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 
00:31:41.598 [2024-10-01 08:46:33.309597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.309607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.309944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.309953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.310304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.310315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.310543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.310552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.310855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.310865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.311181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.311191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.311479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.311488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.311777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.311791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.312081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.312091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.312400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.312411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 
00:31:41.598 [2024-10-01 08:46:33.312688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.312697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.598 [2024-10-01 08:46:33.312893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.598 [2024-10-01 08:46:33.312903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.598 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.313202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.313213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.313511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.313520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.313827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.313836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.314041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.314051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.314251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.314262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.314534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.314543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.314848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.314857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.315150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.315161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 
00:31:41.599 [2024-10-01 08:46:33.315355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.315365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.315550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.315561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.315902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.315912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.316225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.316235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.316548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.316558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.316889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.316899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.317223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.317234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.317410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.317420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.317725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.317736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 00:31:41.599 [2024-10-01 08:46:33.318063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.599 [2024-10-01 08:46:33.318073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.599 qpair failed and we were unable to recover it. 
00:31:41.599 [2024-10-01 08:46:33.318401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.599 [2024-10-01 08:46:33.318411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.599 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420, unrecoverable qpair) repeats with only the timestamps advancing, roughly 200 more times between 08:46:33.318 and 08:46:33.380 ...]
00:31:41.604 [2024-10-01 08:46:33.380284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.604 [2024-10-01 08:46:33.380293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.604 qpair failed and we were unable to recover it.
00:31:41.604 [2024-10-01 08:46:33.380635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.604 [2024-10-01 08:46:33.380644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.604 qpair failed and we were unable to recover it. 00:31:41.604 [2024-10-01 08:46:33.380930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.604 [2024-10-01 08:46:33.380939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.604 qpair failed and we were unable to recover it. 00:31:41.604 [2024-10-01 08:46:33.381240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.604 [2024-10-01 08:46:33.381250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.381512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.381521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.381793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.381803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.382095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.382105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.382429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.382439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.382767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.382777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.383104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.383114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.383429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.383439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 
00:31:41.605 [2024-10-01 08:46:33.383746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.383755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.384100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.384111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.384417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.384429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.384733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.384743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.385035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.385045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.385358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.385368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.385693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.385703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.605 [2024-10-01 08:46:33.385982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.605 [2024-10-01 08:46:33.385992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.605 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.386322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.386333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.386609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.386619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 
00:31:41.878 [2024-10-01 08:46:33.386910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.386919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.387199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.387209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.387531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.387541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.387868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.387878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.388158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.388168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.388466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.388476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.388811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.388821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.389130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.389141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.389492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.389501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.389774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.389784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 
00:31:41.878 [2024-10-01 08:46:33.390066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.390076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.390382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.390393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.390680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.390690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.390992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.391007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.391298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.878 [2024-10-01 08:46:33.391307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.878 qpair failed and we were unable to recover it. 00:31:41.878 [2024-10-01 08:46:33.391583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.391592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.391862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.391872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.392203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.392212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.392493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.392503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.392688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.392699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 
00:31:41.879 [2024-10-01 08:46:33.393008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.393019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.393313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.393323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.393603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.393613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.393945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.393954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.394343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.394352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.394580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.394589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.394776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.394787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.395129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.395139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.395347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.395356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.395658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.395667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 
00:31:41.879 [2024-10-01 08:46:33.395884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.395894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.396070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.396081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.396401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.396410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.396780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.396790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.397128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.397138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.397371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.397382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.397695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.397705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.398006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.398016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.398356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.398366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.398649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.398667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 
00:31:41.879 [2024-10-01 08:46:33.398968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.398978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.399216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.399227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.399518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.399528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.399866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.399884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.400212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.400222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.400507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.400525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.400831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.400841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.401137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.401147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.401445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.401455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.401762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.401772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 
00:31:41.879 [2024-10-01 08:46:33.402109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.402120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.402414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.402424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.402623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.402632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.402936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.879 [2024-10-01 08:46:33.402945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.879 qpair failed and we were unable to recover it. 00:31:41.879 [2024-10-01 08:46:33.403242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.403253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.403609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.403619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.403928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.403938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.404263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.404273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.404586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.404596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.404871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.404881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 
00:31:41.880 [2024-10-01 08:46:33.405166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.405178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.405464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.405474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.405736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.405745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.406038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.406048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.406364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.406373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.406696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.406706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.407036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.407046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.407315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.407325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.407654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.407663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.407859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.407869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 
00:31:41.880 [2024-10-01 08:46:33.408161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.408171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.408374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.408384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.408711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.408722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.408919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.408928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.409069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.409079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.409300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.409311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.409650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.409661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.409877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.409887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.410205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.410216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.410505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.410515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 
00:31:41.880 [2024-10-01 08:46:33.410819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.410829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.411033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.411043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.411368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.411378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.411675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.411685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.411974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.411984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.412276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.412287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.412587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.412597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.412975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.412988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.413286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.413296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.413504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.413514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 
00:31:41.880 [2024-10-01 08:46:33.413865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.413875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.414163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.414174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.414529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.880 [2024-10-01 08:46:33.414539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.880 qpair failed and we were unable to recover it. 00:31:41.880 [2024-10-01 08:46:33.414874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.414884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.415207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.415217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.415537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.415547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.415890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.415901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.416114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.416124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.416445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.416455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.416784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.416794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 
00:31:41.881 [2024-10-01 08:46:33.417056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.417066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.417281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.417291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.417583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.417593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.417913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.417923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.418231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.418241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.418525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.418535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.418839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.418849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.419155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.419165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.419470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.419479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 00:31:41.881 [2024-10-01 08:46:33.419664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.881 [2024-10-01 08:46:33.419673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.881 qpair failed and we were unable to recover it. 
00:31:41.881 [2024-10-01 08:46:33.419992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.881 [2024-10-01 08:46:33.420010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.881 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple repeats for roughly 200 further reconnect attempts between 08:46:33.420 and 08:46:33.482, every one with errno = 111, tqpair=0xde1180, addr=10.0.0.2, port=4420 ...]
00:31:41.887 [2024-10-01 08:46:33.482896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.887 [2024-10-01 08:46:33.482905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.887 qpair failed and we were unable to recover it.
00:31:41.887 [2024-10-01 08:46:33.483187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.483198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.483469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.483479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.483783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.483793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.483979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.483988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.484269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.484279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.484601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.484612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.484893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.484903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.485225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.485236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.485518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.485528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.485846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.485855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 
00:31:41.887 [2024-10-01 08:46:33.486161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.486171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.486425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.486435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.486731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.486743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.486959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.486969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.487282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.487292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.487605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.487614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.487920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.487930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.488218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.488234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.488566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.488575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.488890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.488900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 
00:31:41.887 [2024-10-01 08:46:33.489108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.489118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.489452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.489462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.489786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.489797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.490050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.490060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.490376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.490386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.490709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.490719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.490913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.490923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.491259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.491270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.887 [2024-10-01 08:46:33.491558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.887 [2024-10-01 08:46:33.491568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.887 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.491744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.491755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 
00:31:41.888 [2024-10-01 08:46:33.492063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.492074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.492397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.492406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.492714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.492724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.493004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.493015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.493283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.493293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.493632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.493642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.493977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.493987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.494294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.494305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.494608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.494618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.494949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.494959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 
00:31:41.888 [2024-10-01 08:46:33.495221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.495231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.495522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.495532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.495812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.495822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.496127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.496137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.496450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.496461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.496769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.496779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.496975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.496986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.497322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.497332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.497602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.497612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.497944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.497954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 
00:31:41.888 [2024-10-01 08:46:33.498243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.498253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.498529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.498539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.498816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.498826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.499131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.499144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.499519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.499530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.499831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.499841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.500130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.500141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.500461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.500470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.500775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.500785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.501099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.501110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 
00:31:41.888 [2024-10-01 08:46:33.501387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.501396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.501733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.501743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.502028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.888 [2024-10-01 08:46:33.502038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.888 qpair failed and we were unable to recover it. 00:31:41.888 [2024-10-01 08:46:33.502312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.502322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.502638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.502648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.502951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.502961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.503262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.503272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.503582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.503591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.503870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.503880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.504163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.504173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 
00:31:41.889 [2024-10-01 08:46:33.504456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.504465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.504768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.504778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.505105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.505116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.505445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.505456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.505756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.505766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.506041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.506051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.506317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.506328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.506627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.506637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.506831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.506841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.507108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.507118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 
00:31:41.889 [2024-10-01 08:46:33.507416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.507436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.507769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.507779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.508039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.508049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.508353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.508363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.508650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.508659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.508961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.508970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.509264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.509283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.509608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.509618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.509927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.509937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.510148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.510158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 
00:31:41.889 [2024-10-01 08:46:33.510482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.510492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.510681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.510699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.510859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.510869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.511206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.511216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.511523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.511532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.511839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.511849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.512141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.512152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.512366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.512376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.512559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.512570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 00:31:41.889 [2024-10-01 08:46:33.512900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.889 [2024-10-01 08:46:33.512910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.889 qpair failed and we were unable to recover it. 
00:31:41.890 [2024-10-01 08:46:33.513175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.513186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.513364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.513375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.513670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.513680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.513991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.514004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.514278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.514288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.514606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.514615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.514930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.514940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.515231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.515242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.515443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.515452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.515747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.515756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 
00:31:41.890 [2024-10-01 08:46:33.516064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.516074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.516399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.516409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.516712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.516722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.517005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.517015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.517299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.517309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.517625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.517635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.517941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.517951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.518245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.518255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.518555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.518565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.518875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.518884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 
00:31:41.890 [2024-10-01 08:46:33.519166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.519176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.519486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.519498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.519802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.519812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.520098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.520109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.520312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.520321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.520646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.520656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.520932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.520941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.521206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.521218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.521592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.521602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 00:31:41.890 [2024-10-01 08:46:33.521935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.890 [2024-10-01 08:46:33.521944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.890 qpair failed and we were unable to recover it. 
00:31:41.890 [2024-10-01 08:46:33.522224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.890 [2024-10-01 08:46:33.522234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.890 qpair failed and we were unable to recover it.
00:31:41.891 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats ~208 more times between 08:46:33.522 and 08:46:33.587 ...]
00:31:41.896 [2024-10-01 08:46:33.587340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.896 [2024-10-01 08:46:33.587350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.896 qpair failed and we were unable to recover it.
00:31:41.896 [2024-10-01 08:46:33.587490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.587499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.587882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.587891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.588209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.588220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.588419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.588429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.588708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.588718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.588917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.588927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.589238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.589248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.589440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.589450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.589772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.589781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.590072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.590082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 
00:31:41.896 [2024-10-01 08:46:33.590282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.590292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.590582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.590592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.590880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.590970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.591255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.591266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.896 qpair failed and we were unable to recover it. 00:31:41.896 [2024-10-01 08:46:33.591618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.896 [2024-10-01 08:46:33.591629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.591957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.591967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.592286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.592297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.592619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.592629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.592925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.592935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.593151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.593162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 
00:31:41.897 [2024-10-01 08:46:33.593374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.593384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.593728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.593738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.594035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.594045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.594331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.594340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.594548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.594559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.594775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.594785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.595088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.595104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.595336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.595346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.595631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.595640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 00:31:41.897 [2024-10-01 08:46:33.595854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.897 [2024-10-01 08:46:33.595865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.897 qpair failed and we were unable to recover it. 
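The run above is a tight reconnect loop: every attempt dies in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED, meaning 10.0.0.2 answered the TCP SYN with a reset because nothing was accepting on port 4420 at that moment. Each retry logs the same triplet: the socket-level failure, the transport-level connect error for the qpair, and the verdict that the qpair could not be recovered. A minimal standalone C sketch (illustrative only, not SPDK source; the address and port are copied from the log, and it assumes a reachable host with no listener on that port) that surfaces the same errno:

/* Minimal sketch (not SPDK code): a blocking connect() to a reachable
 * host with no listener on the port gets an RST back and fails with
 * errno 111 (ECONNREFUSED) on Linux, exactly the code that
 * posix.c:posix_sock_create logs above before SPDK retries. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n",
               errno, strerror(errno));
    }

    close(fd);
    return 0;
}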
00:31:41.897 [2024-10-01 08:46:33.596166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.897 [2024-10-01 08:46:33.596176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.897 qpair failed and we were unable to recover it.
00:31:41.897 [2024-10-01 08:46:33.596471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.897 [2024-10-01 08:46:33.596488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.897 qpair failed and we were unable to recover it.
00:31:41.897 [2024-10-01 08:46:33.596688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.897 [2024-10-01 08:46:33.596698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.897 qpair failed and we were unable to recover it.
00:31:41.897 [2024-10-01 08:46:33.596808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.897 [2024-10-01 08:46:33.596817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.897 qpair failed and we were unable to recover it.
00:31:41.897 [2024-10-01 08:46:33.596846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd7ed0 (9): Bad file descriptor
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Write completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 Read completed with error (sct=0, sc=8)
00:31:41.897 starting I/O failed
00:31:41.897 [2024-10-01 08:46:33.597258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:41.897 [2024-10-01 08:46:33.597641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.897 [2024-10-01 08:46:33.597721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:41.897 qpair failed and we were unable to recover it.
00:31:41.897 [2024-10-01 08:46:33.598151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.897 [2024-10-01 08:46:33.598186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda0000b90 with addr=10.0.0.2, port=4420
00:31:41.897 qpair failed and we were unable to recover it.
00:31:41.897 [2024-10-01 08:46:33.598475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.898 [2024-10-01 08:46:33.598486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.898 qpair failed and we were unable to recover it.
00:31:41.898 [2024-10-01 08:46:33.598812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.898 [2024-10-01 08:46:33.598821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.898 qpair failed and we were unable to recover it.
00:31:41.898 [2024-10-01 08:46:33.599101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.898 [2024-10-01 08:46:33.599111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.898 qpair failed and we were unable to recover it.
00:31:41.898 [2024-10-01 08:46:33.599459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.898 [2024-10-01 08:46:33.599469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.898 qpair failed and we were unable to recover it.
00:31:41.898 [2024-10-01 08:46:33.599751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.898 [2024-10-01 08:46:33.599761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.898 qpair failed and we were unable to recover it.
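At this point the failure moves up a layer: the transport cannot flush tqpair=0xdd7ed0, failing with errno 9 (EBADF, Bad file descriptor), the 32 outstanding reads and writes listed above all complete in error, and the queue reports CQ transport error -6, which is -ENXIO (No such device or address), matching the parenthetical in the log. The sct/sc pair comes from the completion's 16-bit status field: sct=0 selects the Generic Command Status type, in which sc=0x08 is Command Aborted due to SQ Deletion, the status expected for I/O caught on a queue that is being torn down. A small C sketch (illustrative only, with the bit layout taken from the NVMe base specification and the value hard-coded from the log) of how the two fields unpack:

/* Minimal sketch (not SPDK source): decodes sct/sc from the 16-bit
 * completion-status halfword as laid out in the NVMe base spec
 * (bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t status = (0u << 9) | (0x08u << 1); /* sct=0, sc=8, as logged */

    unsigned sct = (status >> 9) & 0x7;  /* 0 = generic command status   */
    unsigned sc  = (status >> 1) & 0xff; /* 0x08 = aborted, SQ deleted   */

    printf("sct=%u, sc=%u -> %s\n", sct, sc,
           (sct == 0 && sc == 0x08)
               ? "command aborted due to SQ deletion"
               : "other status");
    return 0;
}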
00:31:41.898 [2024-10-01 08:46:33.600062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.600072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.600415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.600425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.600737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.600747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.601052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.601062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.601392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.601402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.601740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.601749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.602040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.602050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.602373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.602382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.602703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.602712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.603024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.603034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 
00:31:41.898 [2024-10-01 08:46:33.603312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.603321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.603602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.603612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.603899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.603908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.604119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.604130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.604441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.604451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.604750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.604760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.605065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.605075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.605383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.605392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.605692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.605702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.605971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.605980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 
00:31:41.898 [2024-10-01 08:46:33.606313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.606323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.606606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.606616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.606917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.606927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.607219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.607229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.607551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.607560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.607838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.607847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.608161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.608172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.608379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.608388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.608688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.608698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.609040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.609050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 
00:31:41.898 [2024-10-01 08:46:33.609367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.609377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.609705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.609715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.610025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.610035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.898 [2024-10-01 08:46:33.610347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.898 [2024-10-01 08:46:33.610356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.898 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.610667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.610676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.610943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.610953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.611299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.611309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.611625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.611634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.611972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.611982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.612279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.612290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 
00:31:41.899 [2024-10-01 08:46:33.612578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.612588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.612791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.612801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.613110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.613121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.613324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.613334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.613526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.613536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.613797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.613807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.614139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.614149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.614328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.614337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.614721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.614730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.615001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.615011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 
00:31:41.899 [2024-10-01 08:46:33.615194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.615205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.615566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.615576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.615886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.615895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.616211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.616221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.616533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.616543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.616850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.616859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.617171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.617182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.617492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.617501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.617809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.617819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.618038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.618050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 
00:31:41.899 [2024-10-01 08:46:33.618360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.618370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.618681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.618690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.619018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.619028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.619382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.619392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.619699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.619709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.619892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.619902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.620216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.620226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.620540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.620549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.620885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.620895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.621247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.621258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 
00:31:41.899 [2024-10-01 08:46:33.621519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.621529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.621681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.899 [2024-10-01 08:46:33.621692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.899 qpair failed and we were unable to recover it. 00:31:41.899 [2024-10-01 08:46:33.622035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.900 [2024-10-01 08:46:33.622052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.900 qpair failed and we were unable to recover it. 00:31:41.900 [2024-10-01 08:46:33.622334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.900 [2024-10-01 08:46:33.622343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.900 qpair failed and we were unable to recover it. 00:31:41.900 [2024-10-01 08:46:33.622661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.900 [2024-10-01 08:46:33.622670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.900 qpair failed and we were unable to recover it. 00:31:41.900 [2024-10-01 08:46:33.622842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.900 [2024-10-01 08:46:33.622851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.900 qpair failed and we were unable to recover it. 00:31:41.900 [2024-10-01 08:46:33.623067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.900 [2024-10-01 08:46:33.623077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.900 qpair failed and we were unable to recover it. 00:31:41.900 [2024-10-01 08:46:33.623399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.900 [2024-10-01 08:46:33.623408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.900 qpair failed and we were unable to recover it. 00:31:41.900 [2024-10-01 08:46:33.623716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.900 [2024-10-01 08:46:33.623725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.900 qpair failed and we were unable to recover it. 00:31:41.900 [2024-10-01 08:46:33.624026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.900 [2024-10-01 08:46:33.624036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:41.900 qpair failed and we were unable to recover it. 
00:31:41.900 [2024-10-01 08:46:33.624368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:41.900 [2024-10-01 08:46:33.624378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:41.900 qpair failed and we were unable to recover it.
[... the same three-line failure triplet (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every retry between 08:46:33.624 and 08:46:33.687, differing only in microsecond timestamps ...]
00:31:42.182 [2024-10-01 08:46:33.687335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.182 [2024-10-01 08:46:33.687348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.182 qpair failed and we were unable to recover it.
00:31:42.182 [2024-10-01 08:46:33.688161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.182 [2024-10-01 08:46:33.688182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.182 qpair failed and we were unable to recover it. 00:31:42.182 [2024-10-01 08:46:33.688535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.182 [2024-10-01 08:46:33.688547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.182 qpair failed and we were unable to recover it. 00:31:42.182 [2024-10-01 08:46:33.689429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.182 [2024-10-01 08:46:33.689451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.182 qpair failed and we were unable to recover it. 00:31:42.182 [2024-10-01 08:46:33.689793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.182 [2024-10-01 08:46:33.689804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.182 qpair failed and we were unable to recover it. 00:31:42.182 [2024-10-01 08:46:33.690086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.182 [2024-10-01 08:46:33.690097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.182 qpair failed and we were unable to recover it. 00:31:42.182 [2024-10-01 08:46:33.690373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.690383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.690572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.690582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.690749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.690759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.691066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.691077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.691185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.691195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 
00:31:42.183 [2024-10-01 08:46:33.691514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.691524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.691820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.691830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.692112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.692122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.692334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.692343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.692606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.692619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.692804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.692814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.693191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.693204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.693499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.693511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.693791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.693802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.694118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.694129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 
00:31:42.183 [2024-10-01 08:46:33.694983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.695009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.695316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.695328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.695612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.695622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.695907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.695918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.696221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.696232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.696522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.696532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.696836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.696847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.697108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.697119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.697951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.697971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.698273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.698284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 
00:31:42.183 [2024-10-01 08:46:33.698547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.698558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.699294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.699317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.183 [2024-10-01 08:46:33.699545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.183 [2024-10-01 08:46:33.699556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.183 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.699808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.699818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.700136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.700146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.700352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.700362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.700691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.700700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.701028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.701039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.701344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.701354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.701660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.701670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 
00:31:42.184 [2024-10-01 08:46:33.702000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.702011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.702299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.702308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.702633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.702643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.702929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.702939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.703249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.703260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.703568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.703577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.703851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.703861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.704058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.704070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.704402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.704412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.704750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.704760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 
00:31:42.184 [2024-10-01 08:46:33.705062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.705073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.705358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.705368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.705714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.705724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.706020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.706032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.706330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.706340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.706616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.706626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.706861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.706871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.707203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.707214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.707486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.707496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.707820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.707830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 
00:31:42.184 [2024-10-01 08:46:33.708115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.184 [2024-10-01 08:46:33.708126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.184 qpair failed and we were unable to recover it. 00:31:42.184 [2024-10-01 08:46:33.708454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.708464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.708768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.708778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.708978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.708988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.709302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.709313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.709584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.709594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.709910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.709920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.710242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.710252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.710566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.710577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.710903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.710913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 
00:31:42.185 [2024-10-01 08:46:33.711215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.711225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.711419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.711436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.711721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.711731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.712025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.712036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.712376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.712387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.712721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.712731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.713061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.713072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.713327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.713337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.713654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.713664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.713975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.713985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 
00:31:42.185 [2024-10-01 08:46:33.714317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.714327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.714631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.714641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.714845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.714857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.715213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.715224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.715422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.715433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.715710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.715720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.715920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.715929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.716324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.716334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.185 qpair failed and we were unable to recover it. 00:31:42.185 [2024-10-01 08:46:33.716623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.185 [2024-10-01 08:46:33.716633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.716951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.716962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 
00:31:42.186 [2024-10-01 08:46:33.717198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.717209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.717517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.717527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.717859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.717870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.718223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.718234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.718524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.718534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.718859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.718870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.719184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.719194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.719495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.719505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.719801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.719811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.720118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.720128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 
00:31:42.186 [2024-10-01 08:46:33.720436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.720446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.720756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.720767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.721031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.721042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.721347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.721357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.721637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.721647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.721992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.722008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.722307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.722317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.722591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.722601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.722931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.722941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.723242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.723253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 
00:31:42.186 [2024-10-01 08:46:33.723555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.723566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.723899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.723911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.724221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.724232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.724627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.724637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.724922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.724931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.186 [2024-10-01 08:46:33.725247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.186 [2024-10-01 08:46:33.725258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.186 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.725573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.725583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.725844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.725853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.726185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.726196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.726490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.726501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 
00:31:42.187 [2024-10-01 08:46:33.726835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.726845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.727032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.727044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.727450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.727460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.727736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.727746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.727926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.727936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.728158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.728169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.728528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.728538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.728898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.728908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.729152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.729162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 00:31:42.187 [2024-10-01 08:46:33.729526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.187 [2024-10-01 08:46:33.729537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.187 qpair failed and we were unable to recover it. 
00:31:42.187 [2024-10-01 08:46:33.729827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.187 [2024-10-01 08:46:33.729838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.187 qpair failed and we were unable to recover it.
00:31:42.194 [... the same three-line error sequence repeats for every subsequent reconnect attempt, with timestamps from 08:46:33.730 through 08:46:33.792 (about 200 occurrences in total; every connect() to 10.0.0.2, port 4420 on tqpair=0xde1180 failed with errno = 111 and the qpair could not be recovered) ...]
00:31:42.194 [2024-10-01 08:46:33.792362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.194 [2024-10-01 08:46:33.792372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.194 qpair failed and we were unable to recover it. 00:31:42.194 [2024-10-01 08:46:33.792543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.194 [2024-10-01 08:46:33.792554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.194 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.792882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.792892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.793076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.793086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.793500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.793510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.793784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.793794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.793976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.793986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.794292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.794302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.794644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.794653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.794912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.794923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 
00:31:42.195 [2024-10-01 08:46:33.795218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.795228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.795518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.795528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.795706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.795715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.796021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.796031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.796335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.796348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.796651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.796661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.796987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.797002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.797306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.797316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.797608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.797617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.797891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.797901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 
00:31:42.195 [2024-10-01 08:46:33.798231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.798242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.798578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.798588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.798912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.798922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.799222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.799232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.799500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.799509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.195 [2024-10-01 08:46:33.799794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.195 [2024-10-01 08:46:33.799804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.195 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.800124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.800134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.800442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.800452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.800780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.800789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.801098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.801108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 
00:31:42.196 [2024-10-01 08:46:33.801407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.801417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.801745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.801755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.802016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.802026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.802363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.802374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.802585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.802595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.802904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.802913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.803140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.803150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.803460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.803470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.803822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.803831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.804111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.804121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 
00:31:42.196 [2024-10-01 08:46:33.804461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.804470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.804745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.804754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.805068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.805078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.805377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.805394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.805722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.805733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.805992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.806006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.806286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.806296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.806625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.806636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.806940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.806949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.807277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.807287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 
00:31:42.196 [2024-10-01 08:46:33.807591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.807601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.807905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.807914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.808199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.808209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.808592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.808602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.196 qpair failed and we were unable to recover it. 00:31:42.196 [2024-10-01 08:46:33.808876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.196 [2024-10-01 08:46:33.808886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.809209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.809220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.809480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.809490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.809769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.809779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.809992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.810004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.810218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.810228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 
00:31:42.197 [2024-10-01 08:46:33.810548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.810557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.810852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.810862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.811177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.811187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.811509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.811519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.811828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.811837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.812131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.812140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.812450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.812461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.812789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.812800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.813098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.813108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.813404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.813414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 
00:31:42.197 [2024-10-01 08:46:33.813686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.813695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.813992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.814006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.814284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.814295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.814584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.814595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.815440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.815461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.815805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.815816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.816787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.816811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.817141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.817154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.817458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.817468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.817772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.817781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 
00:31:42.197 [2024-10-01 08:46:33.817974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.817983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.818334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.818344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.197 [2024-10-01 08:46:33.818670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.197 [2024-10-01 08:46:33.818680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.197 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.818954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.818964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.819307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.819318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.819632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.819642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.819919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.819929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.820307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.820317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.820639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.820649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.820969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.820979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 
00:31:42.198 [2024-10-01 08:46:33.821301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.821311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.821571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.821581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.821911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.821922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.822229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.822238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.822440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.822450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.822810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.822820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.823011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.823024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.823300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.823310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.823568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.823578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.823891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.823901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 
00:31:42.198 [2024-10-01 08:46:33.824231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.824241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.824553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.824563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.824886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.824897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.825163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.825173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.825366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.825376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.825787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.825797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.826019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.826029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.826353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.826362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.826652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.826661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.826967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.826977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 
00:31:42.198 [2024-10-01 08:46:33.827370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.198 [2024-10-01 08:46:33.827380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.198 qpair failed and we were unable to recover it. 00:31:42.198 [2024-10-01 08:46:33.827599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.827609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.827925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.827934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.828166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.828177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.828532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.828542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.828891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.828900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.829097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.829108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.829435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.829445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.829752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.829762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.829975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.829984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 
00:31:42.199 [2024-10-01 08:46:33.830292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.830302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.830610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.830619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.830899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.830909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.831213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.831225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.831551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.831561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.831864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.831875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.832068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.832078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.832394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.832404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.832562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.832571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 00:31:42.199 [2024-10-01 08:46:33.832901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.199 [2024-10-01 08:46:33.832910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.199 qpair failed and we were unable to recover it. 
00:31:42.199 [2024-10-01 08:46:33.833235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.199 [2024-10-01 08:46:33.833245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.199 qpair failed and we were unable to recover it.
00:31:42.199 [... the same two-line error pair — connect() failed, errno = 111 (connection refused) from posix.c:1055:posix_sock_create, followed by the nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0xde1180 with addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 08:46:33.833 through 08:46:33.896 ...]
00:31:42.207 [2024-10-01 08:46:33.896452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.207 [2024-10-01 08:46:33.896462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.207 qpair failed and we were unable to recover it.
00:31:42.207 [2024-10-01 08:46:33.896813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.896823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.897105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.897115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.897415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.897425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.897731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.897740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.898068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.898078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.898373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.898383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.898680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.898690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.899005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.899015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.899333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.899343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.899647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.899658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 
00:31:42.207 [2024-10-01 08:46:33.899862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.899872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.900166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.900176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.900375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.900385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.900718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.900729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.901059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.901069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.901405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.901416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.901687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.901697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.902003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.902013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.902227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.902236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 00:31:42.207 [2024-10-01 08:46:33.902420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.207 [2024-10-01 08:46:33.902431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.207 qpair failed and we were unable to recover it. 
00:31:42.207 [2024-10-01 08:46:33.902673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.902683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.902952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.902962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.903281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.903291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.903626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.903636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.903939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.903949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.904161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.904172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.904449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.904458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.904736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.904746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.905003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.905015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.905330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.905340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 
00:31:42.208 [2024-10-01 08:46:33.905639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.905649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.905866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.905876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.906181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.906191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.906518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.906528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.906807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.906817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.907122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.907132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.907414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.907424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.907582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.907593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.907939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.907950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.908224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.908234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 
00:31:42.208 [2024-10-01 08:46:33.908537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.908546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.908874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.908885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.909207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.909218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.909417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.909427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.909757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.909767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.910059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.910069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.910340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.910349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.910685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.208 [2024-10-01 08:46:33.910695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.208 qpair failed and we were unable to recover it. 00:31:42.208 [2024-10-01 08:46:33.910976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.910985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.911179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.911190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 
00:31:42.209 [2024-10-01 08:46:33.911492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.911503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.911815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.911825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.912132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.912142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.912429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.912439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.912745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.912755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.913007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.913018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.913308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.913317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.913646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.913657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.913979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.913990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.914335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.914346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 
00:31:42.209 [2024-10-01 08:46:33.914656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.914667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.914941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.914951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.915257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.915267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.915550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.915561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.915871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.915881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.916206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.916216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.916559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.916570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.916893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.916905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.917181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.917192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.917479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.917490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 
00:31:42.209 [2024-10-01 08:46:33.917795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.917806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.918136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.918146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.918456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.918467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.918795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.918804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.919105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.919115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.919438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.919449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.209 qpair failed and we were unable to recover it. 00:31:42.209 [2024-10-01 08:46:33.919775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.209 [2024-10-01 08:46:33.919786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.920097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.920108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.920291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.920300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.920586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.920596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 
00:31:42.210 [2024-10-01 08:46:33.920771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.920781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.921004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.921015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.921384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.921395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.921733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.921743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.921915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.921925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.922241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.922252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.922552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.922563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.922873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.922883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.923065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.923076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.923451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.923463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 
00:31:42.210 [2024-10-01 08:46:33.923777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.923788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.924009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.924019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.924323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.924333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.924634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.924645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.924996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.925006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.925340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.925350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.925621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.925634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.925963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.925973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.926305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.926315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.926606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.926616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 
00:31:42.210 [2024-10-01 08:46:33.926922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.926933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.927242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.927253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.927332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.927342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.927619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.210 [2024-10-01 08:46:33.927629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.210 qpair failed and we were unable to recover it. 00:31:42.210 [2024-10-01 08:46:33.927971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.927981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.928263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.928275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.928577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.928587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.928918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.928928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.929162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.929173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.929338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.929348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 
00:31:42.211 [2024-10-01 08:46:33.929689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.929699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.929985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.929997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.930312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.930322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.930611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.930620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.930940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.930952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.931225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.931237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.931572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.931584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.931891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.931901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.932169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.932180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.932383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.932393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 
00:31:42.211 [2024-10-01 08:46:33.932595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.932605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.932773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.932784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.933078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.933088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.933383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.933392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.933717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.933726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.934021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.934032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.934359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.934370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.934648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.934658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.934826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.934835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 00:31:42.211 [2024-10-01 08:46:33.935006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.211 [2024-10-01 08:46:33.935017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.211 qpair failed and we were unable to recover it. 
00:31:42.211 [2024-10-01 08:46:33.935326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.211 [2024-10-01 08:46:33.935336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.211 qpair failed and we were unable to recover it.
00:31:42.211 (the same three-line error repeats for every reconnect attempt from 08:46:33.935 through 08:46:33.998, always with errno = 111 against tqpair=0xde1180 at 10.0.0.2:4420; only the first and last occurrences are kept here)
00:31:42.499 [2024-10-01 08:46:33.998726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.499 [2024-10-01 08:46:33.998736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.499 qpair failed and we were unable to recover it.
00:31:42.499 [2024-10-01 08:46:33.998904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.499 [2024-10-01 08:46:33.998916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.499 qpair failed and we were unable to recover it. 00:31:42.499 [2024-10-01 08:46:33.999106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.499 [2024-10-01 08:46:33.999122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.499 qpair failed and we were unable to recover it. 00:31:42.499 [2024-10-01 08:46:33.999392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.499 [2024-10-01 08:46:33.999402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.499 qpair failed and we were unable to recover it. 00:31:42.499 [2024-10-01 08:46:33.999614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:33.999624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:33.999928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:33.999939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.000234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.000245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.000565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.000576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.000891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.000902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.001248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.001259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.001547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.001557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 
00:31:42.500 [2024-10-01 08:46:34.001833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.001843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.001950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.001960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.002324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.002335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.002615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.002625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.002943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.002954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.003162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.003174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.003492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.003502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.003805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.003816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.004037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.004048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.004323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.004334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 
00:31:42.500 [2024-10-01 08:46:34.004659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.004669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.005011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.005023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.005330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.005340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.005689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.005699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.005988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.006001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.006309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.006320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.006581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.006592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.006850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.006861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.007175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.007186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.007500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.007512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 
00:31:42.500 [2024-10-01 08:46:34.007820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.007832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.008147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.008158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.008530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.008541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.008829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.008840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.009014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.009025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.009398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.009409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.009711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.009722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.010028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.010039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.010322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.010333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.010665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.010676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 
00:31:42.500 [2024-10-01 08:46:34.010979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.500 [2024-10-01 08:46:34.010990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.500 qpair failed and we were unable to recover it. 00:31:42.500 [2024-10-01 08:46:34.011202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.011214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.011529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.011540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.011856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.011867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.012174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.012185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.012376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.012386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.012663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.012674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.012860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.012871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.013194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.013206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.013516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.013527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 
00:31:42.501 [2024-10-01 08:46:34.013846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.013857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.014170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.014181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.014490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.014500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.014814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.014824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.015166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.015177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.015543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.015554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.015745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.015755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.016116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.016128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.016458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.016469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.016638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.016648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 
00:31:42.501 [2024-10-01 08:46:34.016937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.016948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.017262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.017273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.017531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.017541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.017820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.017831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.018113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.018123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.018411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.018421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.018695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.018706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.019009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.019020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.019408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.019418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.019582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.019594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 
00:31:42.501 [2024-10-01 08:46:34.019800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.019810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.020084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.020094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.020429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.020439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.020723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.020733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.021064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.021074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.021409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.021419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.021602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.021612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.021877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.021887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.022189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.501 [2024-10-01 08:46:34.022199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.501 qpair failed and we were unable to recover it. 00:31:42.501 [2024-10-01 08:46:34.022523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.022532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 
00:31:42.502 [2024-10-01 08:46:34.022695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.022706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.023016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.023026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.023254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.023264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.023553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.023562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.023763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.023773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.024026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.024036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.024250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.024260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.024563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.024572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.024880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.024891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.025128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.025139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 
00:31:42.502 [2024-10-01 08:46:34.025435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.025446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.025748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.025758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.026053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.026063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.026461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.026476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.026780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.026789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.026982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.026992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.027229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.027238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.027442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.027452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.027786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.027795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.028132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.028142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 
00:31:42.502 [2024-10-01 08:46:34.028511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.028522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.028850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.028860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.029074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.029084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.029368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.029377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.029655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.029666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.029951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.029960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.030268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.030277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.030594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.030603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.030902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.030912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.031239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.031249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 
00:31:42.502 [2024-10-01 08:46:34.031558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.031568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.031902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.031911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.032069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.032079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.032483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.032493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.032802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.032812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.033111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.033121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.033412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.033421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.502 [2024-10-01 08:46:34.033614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.502 [2024-10-01 08:46:34.033624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.502 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.033906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.033915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.034220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.034231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 
00:31:42.503 [2024-10-01 08:46:34.034519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.034529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.034834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.034844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.035155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.035166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.035462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.035479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.035781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.035790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.036022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.036032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.036343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.036353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.036671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.036680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.037001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.037011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 00:31:42.503 [2024-10-01 08:46:34.037245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.503 [2024-10-01 08:46:34.037255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.503 qpair failed and we were unable to recover it. 
00:31:42.503 [2024-10-01 08:46:34.037572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.503 [2024-10-01 08:46:34.037582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.503 qpair failed and we were unable to recover it.
00:31:42.503 [last two messages repeated for every reconnect attempt from 08:46:34.037887 through 08:46:34.100609; each attempt against tqpair=0xde1180 (addr=10.0.0.2, port=4420) failed with errno = 111 and ended with "qpair failed and we were unable to recover it."]
00:31:42.509 [2024-10-01 08:46:34.101005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.509 [2024-10-01 08:46:34.101016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.509 qpair failed and we were unable to recover it.
00:31:42.509 [2024-10-01 08:46:34.101286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.101295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.101606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.101618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.101891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.101901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.102206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.102216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.102498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.102508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.102793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.102803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.103074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.103084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.103257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.103268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.103589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.103599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.103716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.103726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-10-01 08:46:34.104032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.104043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.104359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.104369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.104639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.104648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.104811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.104821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.105149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.105159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.105475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.105485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.105793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.105803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.106114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.106124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.106442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.106452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.106786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.106796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-10-01 08:46:34.107107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.107117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.107415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.107425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.107696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.107705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.108018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.108028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.108364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.108374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.108565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.108575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.108845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.108855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.109183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.109194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.109512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.109524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.109823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.109834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 
00:31:42.509 [2024-10-01 08:46:34.110058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.110069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.110379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.110388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.110679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.110689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.111001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.111011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.111299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.509 [2024-10-01 08:46:34.111308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.509 qpair failed and we were unable to recover it. 00:31:42.509 [2024-10-01 08:46:34.111657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.111667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.111937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.111947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.112143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.112153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.112419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.112429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.112749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.112759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 
00:31:42.510 [2024-10-01 08:46:34.113088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.113099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.113398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.113407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.113679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.113690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.114002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.114012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.114294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.114305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.114658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.114668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.114866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.114875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.115104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.115114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.115421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.115432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.115704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.115714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 
00:31:42.510 [2024-10-01 08:46:34.116016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.116026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.116324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.116334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.116646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.116656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.116926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.116936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.117143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.117154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.117450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.117460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.117643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.117654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.117973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.117984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.118300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.118311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.118590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.118601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 
00:31:42.510 [2024-10-01 08:46:34.118890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.118900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.119178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.119189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.119490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.119502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.119831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.119842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.120159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.120169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.120434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.120444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.120771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.120781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.121064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.121074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.121307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.121317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.121598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.121608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 
00:31:42.510 [2024-10-01 08:46:34.121945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.121955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.122225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.122235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.510 qpair failed and we were unable to recover it. 00:31:42.510 [2024-10-01 08:46:34.122558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.510 [2024-10-01 08:46:34.122568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.122852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.122863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.123136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.123146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.123431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.123441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.123769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.123779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.124082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.124093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.124417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.124427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.124633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.124643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 
00:31:42.511 [2024-10-01 08:46:34.124906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.124916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.125205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.125216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.125406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.125416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.125598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.125608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.125885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.125896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.126205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.126216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.126507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.126517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.126820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.126831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.127156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.127166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.127462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.127472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 
00:31:42.511 [2024-10-01 08:46:34.127781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.127791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.128063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.128074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.128398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.128407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.128681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.128691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.128971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.128981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.129256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.129267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.129476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.129486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.129793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.129803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.130104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.130114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.130396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.130406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 
00:31:42.511 [2024-10-01 08:46:34.130681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.130691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.131007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.131017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.131296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.131306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.131628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.131639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.131937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.131948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.132269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.132279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.132603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.132614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.132914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.132924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.133220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.133231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 00:31:42.511 [2024-10-01 08:46:34.133567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.133578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.511 qpair failed and we were unable to recover it. 
00:31:42.511 [2024-10-01 08:46:34.133876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.511 [2024-10-01 08:46:34.133889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.134161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.134173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.134474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.134484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.134792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.134802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.135088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.135099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.135422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.135432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.135734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.135744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.136037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.136047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.136351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.136361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.136673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.136682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 
00:31:42.512 [2024-10-01 08:46:34.136964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.136973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.137264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.137274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.137540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.137549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.137879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.137889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.138209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.138220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.138527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.138536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.138839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.138850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.139117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.139128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.139457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.139468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.139767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.139777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 
00:31:42.512 [2024-10-01 08:46:34.139961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.139972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.140179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.140189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.140528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.140538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.140733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.140742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.141060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.141070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.141409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.141419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.141745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.141755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.142133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.142144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.142487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.142498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.142680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.142691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 
00:31:42.512 [2024-10-01 08:46:34.143083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.143093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.143384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.143393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.143692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.143702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.144005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.144015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.512 [2024-10-01 08:46:34.144329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.512 [2024-10-01 08:46:34.144339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.512 qpair failed and we were unable to recover it. 00:31:42.513 [2024-10-01 08:46:34.144662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.513 [2024-10-01 08:46:34.144673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.513 qpair failed and we were unable to recover it. 00:31:42.513 [2024-10-01 08:46:34.144973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.513 [2024-10-01 08:46:34.144984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.513 qpair failed and we were unable to recover it. 00:31:42.513 [2024-10-01 08:46:34.145193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.513 [2024-10-01 08:46:34.145204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.513 qpair failed and we were unable to recover it. 00:31:42.513 [2024-10-01 08:46:34.145503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.513 [2024-10-01 08:46:34.145514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.513 qpair failed and we were unable to recover it. 00:31:42.513 [2024-10-01 08:46:34.145833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.513 [2024-10-01 08:46:34.145844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.513 qpair failed and we were unable to recover it. 
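errno = 111 in these entries is Linux ECONNREFUSED: each connect() from the host to 10.0.0.2:4420 is answered with a TCP reset because no NVMe/TCP target is listening there, which is exactly the condition this disconnect test provokes (the target application is killed, see the "Killed" line just below), so the host driver logs a failed qpair and retries. The same errno can be reproduced outside SPDK with a minimal C sketch like the one below; the file name is made up, and the address and port are simply the values from this log (any reachable address with no listener on the port behaves the same):

/* connect_refused.c - standalone sketch, not SPDK code: a plain
 * connect() to a reachable address with no listener fails with
 * errno 111 (ECONNREFUSED), the errno posix_sock_create logs above.
 * Build: cc -o connect_refused connect_refused.c */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        /* With no listener on the port this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}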
00:31:42.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3943546 Killed "${NVMF_APP[@]}" "$@"
00:31:42.513 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:31:42.513 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:42.513 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:31:42.513 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:42.513 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3944534
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3944534
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3944534 ']'
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:42.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:42.514 08:46:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
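This xtrace is the recovery half of the disconnect test: the old target (pid 3943546) was killed above, nvmf_tgt is relaunched inside the cvl_0_0_ns_spdk network namespace as pid 3944534, and waitforlisten (from the harness's autotest_common.sh) then polls, with max_retries=100 per the trace, until the new process is accepting connections on its RPC socket at /var/tmp/spdk.sock. A rough sketch of that wait step, assuming a plain UNIX-domain connect probe — wait_for_listen is a made-up name, not the harness's actual shell function:

    /* Illustrative only: poll until a UNIX-domain socket accepts
     * connections, roughly what waitforlisten does for /var/tmp/spdk.sock. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_listen(const char *path, int max_retries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;              /* RPC server is up and listening */
            }
            close(fd);
            sleep(1);                  /* socket not ready yet; try again */
        }
        return -1;                     /* gave up after max_retries */
    }

    int main(void)
    {
        return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
    }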
00:31:42.518 [2024-10-01 08:46:34.199635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.199645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.199742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.199752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.200023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.200034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.200342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.200353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.200676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.200686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.200973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.200983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.201189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.201199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.201543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.201553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.201889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.201899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.202263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.202275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 
00:31:42.518 [2024-10-01 08:46:34.202610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.202622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.202963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.202974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.203281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.203291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.203610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.203620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.203963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.203973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.204216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.204226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.204548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.204559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.204737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.204749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.205051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.205062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.205400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.205411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 
00:31:42.518 [2024-10-01 08:46:34.205760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.205770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.206058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.206068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.518 [2024-10-01 08:46:34.206446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.518 [2024-10-01 08:46:34.206456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.518 qpair failed and we were unable to recover it. 00:31:42.519 [2024-10-01 08:46:34.206635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.519 [2024-10-01 08:46:34.206645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.519 qpair failed and we were unable to recover it. 00:31:42.519 [2024-10-01 08:46:34.206977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.519 [2024-10-01 08:46:34.206987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.519 qpair failed and we were unable to recover it. 00:31:42.519 [2024-10-01 08:46:34.207174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.519 [2024-10-01 08:46:34.207185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.519 qpair failed and we were unable to recover it. 00:31:42.519 [2024-10-01 08:46:34.207474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.519 [2024-10-01 08:46:34.207484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.519 qpair failed and we were unable to recover it. 00:31:42.519 [2024-10-01 08:46:34.207813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.519 [2024-10-01 08:46:34.207823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.519 qpair failed and we were unable to recover it. 00:31:42.519 [2024-10-01 08:46:34.208101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.519 [2024-10-01 08:46:34.208114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.519 qpair failed and we were unable to recover it. 00:31:42.519 [2024-10-01 08:46:34.208509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.519 [2024-10-01 08:46:34.208520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.519 qpair failed and we were unable to recover it. 
00:31:42.519 [2024-10-01 08:46:34.209412] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:31:42.519 [2024-10-01 08:46:34.209458] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
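The bracketed line above is the argument vector that SPDK hands to DPDK's EAL while the nvmf target comes up; until that initialization finishes and a TCP listener is bound, the initiator-side connect() retries logged around it keep failing. As a rough standalone sketch (hypothetical, not SPDK's actual startup path), the same logged parameters could be passed to rte_eal_init() directly:

/* Hypothetical sketch: feed the logged "[ DPDK EAL parameters: ... ]"
 * argv to DPDK's EAL init. Requires DPDK headers/libs; argument values
 * are copied from the log line above. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
        char *eal_argv[] = {
                "nvmf", "-c", "0xF0", "--no-telemetry",
                "--log-level=lib.eal:6",
                "--base-virtaddr=0x200000000000",
                "--match-allocations",
                "--file-prefix=spdk0",
                "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* rte_eal_init() consumes the EAL arguments; < 0 means init failed */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
                fprintf(stderr, "rte_eal_init failed\n");
                return 1;
        }
        printf("EAL up on core mask 0xF0\n");
        rte_eal_cleanup();
        return 0;
}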
[... the connect() failed, errno = 111 retry triplets resume immediately after the initialization banner and continue uninterrupted through 08:46:34.250, still against tqpair=0xde1180 at 10.0.0.2:4420 ...]
00:31:42.523 [2024-10-01 08:46:34.250276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.523 [2024-10-01 08:46:34.250287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.523 qpair failed and we were unable to recover it.
00:31:42.523 [2024-10-01 08:46:34.250629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.250639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.250953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.250963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.251289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.251300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.251592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.251603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.251913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.251922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.252235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.252246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.252580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.252590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.252782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.252792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.253123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.253134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.253358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.253368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 
00:31:42.523 [2024-10-01 08:46:34.253553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.253563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.253907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.253917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.254237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.254247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.254549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.254559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.254768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.254779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.255010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.255021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.255309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.255318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.255601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.255613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.255790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.255800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.256117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.256127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 
00:31:42.523 [2024-10-01 08:46:34.256437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.256447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.256723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.256735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.257064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.257074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.257397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.257406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.257672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.257682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.257999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.258009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.523 [2024-10-01 08:46:34.258384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.523 [2024-10-01 08:46:34.258393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.523 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.258676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.258686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.259006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.259016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.259339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.259348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 
00:31:42.524 [2024-10-01 08:46:34.259643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.259653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.260080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.260090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.260387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.260396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.260728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.260738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.261059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.261070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.261259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.261270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.261583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.261593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.261925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.261934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.262159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.262169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.262525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.262535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 
00:31:42.524 [2024-10-01 08:46:34.262833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.262842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.263044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.263054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.263407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.263418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.263676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.263685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.264021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.264031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.264373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.264383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.264577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.264587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.264899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.264909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.265210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.265220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.265486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.265495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 
00:31:42.524 [2024-10-01 08:46:34.265770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.265780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.266125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.266136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.266417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.266427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.266703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.266713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.267048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.267058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.267352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.267362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.267690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.267700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.267987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.268000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.268309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.268319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.268408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.268417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 
00:31:42.524 [2024-10-01 08:46:34.268686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.268696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.268978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.524 [2024-10-01 08:46:34.268987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.524 qpair failed and we were unable to recover it. 00:31:42.524 [2024-10-01 08:46:34.269160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.269173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.269546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.269556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.269899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.269910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.270229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.270240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.270505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.270515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.270807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.270817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.271127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.271137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.271488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.271498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 
00:31:42.525 [2024-10-01 08:46:34.271780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.271789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.272101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.272112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.272436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.272446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.272725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.272735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.272917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.272928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.273270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.273280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.273566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.273577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.273879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.273888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.274158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.274168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.274456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.274466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 
00:31:42.525 [2024-10-01 08:46:34.274773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.274783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.275112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.275122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.275340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.275349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.275644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.275653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.275940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.275950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.276268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.276278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.276519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.276529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.276858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.276868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.277162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.277172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.277349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.277360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 
00:31:42.525 [2024-10-01 08:46:34.277638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.277648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.277989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.278003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.278299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.278309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.278624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.278634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.278974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.278985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.279300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.279310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.279509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.279519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.279733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.279743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.279899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.279910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 00:31:42.525 [2024-10-01 08:46:34.280202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.525 [2024-10-01 08:46:34.280212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.525 qpair failed and we were unable to recover it. 
00:31:42.526 [2024-10-01 08:46:34.280505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.280515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.280832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.280842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.281121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.281132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.281432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.281443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.281659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.281670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.281968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.281978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.282262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.282273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.282540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.282550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.282834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.282843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.283127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.283137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 
00:31:42.526 [2024-10-01 08:46:34.283493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.283503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.283763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.283773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.284076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.284086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.284417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.284426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.284758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.284768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.285073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.285083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.285395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.285405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.285715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.285725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.286012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.286023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.286354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.286363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 
00:31:42.526 [2024-10-01 08:46:34.286628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.286637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.286985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.286997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.287311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.287321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.287629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.287639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.287955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.287965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.288247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.288258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.288440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.288449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.288838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.288847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.289063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.289073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.289271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.289281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 
00:31:42.526 [2024-10-01 08:46:34.289588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.289599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.289907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.289917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.290121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.290131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.290452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.290462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.290764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.290774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.290942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.290952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.291324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.291334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.291639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.291649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.526 qpair failed and we were unable to recover it. 00:31:42.526 [2024-10-01 08:46:34.291955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.526 [2024-10-01 08:46:34.291965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.527 qpair failed and we were unable to recover it. 00:31:42.527 [2024-10-01 08:46:34.292261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.527 [2024-10-01 08:46:34.292271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.527 qpair failed and we were unable to recover it. 
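errno 111 is ECONNREFUSED on Linux: every connect() in the run above is actively refused, meaning nothing is accepting connections on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) at the time of each attempt. As a minimal standalone sketch (not SPDK's posix_sock_create, just the same syscall path, with the address and port copied from the log), the following C program prints the identical errno when no listener is up:

/* Minimal sketch: a plain TCP connect() to an address with no listener
 * fails with errno 111 (ECONNREFUSED), matching the records above.
 * Illustrative only; this is not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),                        /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);     /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}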
00:31:42.527 [... identical connect()/qpair-failure records continue from 08:46:34.292 to 08:46:34.294 ...]
00:31:42.527 [2024-10-01 08:46:34.294390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:42.840 [... the same record resumes immediately after the NOTICE and repeats through 08:46:34.301 ...]
00:31:42.840 [2024-10-01 08:46:34.302300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.840 [2024-10-01 08:46:34.302310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.840 qpair failed and we were unable to recover it. 00:31:42.840 [2024-10-01 08:46:34.302510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.840 [2024-10-01 08:46:34.302520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.840 qpair failed and we were unable to recover it. 00:31:42.840 [2024-10-01 08:46:34.302844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.302854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.303211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.303221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.303516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.303526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.303828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.303839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.304158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.304169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.304462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.304472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.304778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.304788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.304990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.305003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 
00:31:42.841 [2024-10-01 08:46:34.305342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.305352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.305663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.305673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.305996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.306006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.306288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.306297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.306487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.306499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.306674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.306685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.306971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.306981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.307300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.307310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.307679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.307689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.307968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.307978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 
00:31:42.841 [2024-10-01 08:46:34.308293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.308303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.308651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.308661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.308966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.308975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.309285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.309296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.309615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.309625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.309804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.309815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.310109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.310120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.310308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.310318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.310627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.310637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.310977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.310987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 
00:31:42.841 [2024-10-01 08:46:34.311302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.311312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.311500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.311509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.311787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.311797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.311992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.312011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.312173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.312183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.312470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.312479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.312783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.312792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.312978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.312988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.313332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.313342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 00:31:42.841 [2024-10-01 08:46:34.313528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.313538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.841 qpair failed and we were unable to recover it. 
00:31:42.841 [2024-10-01 08:46:34.313903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.841 [2024-10-01 08:46:34.313913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.314218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.314228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.314405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.314415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.314774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.314784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.315075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.315086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.315423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.315432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.315745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.315754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.316082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.316092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.316385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.316395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.316707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.316717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 
00:31:42.842 [2024-10-01 08:46:34.317004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.317015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.317324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.317334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.317520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.317530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.317860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.317870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.318059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.318069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.318199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.318211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.318419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.318429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.318755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.318765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.318988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.319001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.319285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.319295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 
00:31:42.842 [2024-10-01 08:46:34.319606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.319616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.319905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.319915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.320213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.320224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.320538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.320548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.320830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.320840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.321166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.321176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.321509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.321518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.321807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.321817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.322153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.322163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.322355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.322365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 
00:31:42.842 [2024-10-01 08:46:34.322712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.322722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.323060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.323071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.323346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.323356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.323620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.323630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.323955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.842 [2024-10-01 08:46:34.323965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.842 qpair failed and we were unable to recover it. 00:31:42.842 [2024-10-01 08:46:34.324275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.324285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.324614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.324624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.324929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.324940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.325246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.325258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.325525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.325535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 
00:31:42.843 [2024-10-01 08:46:34.325828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.325838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.326168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.326179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.326469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.326482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.326783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.326794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.327060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.327070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.327359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.327370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.327730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.327741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.328052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.328063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.328367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.328378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.328700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.328711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 
00:31:42.843 [2024-10-01 08:46:34.328978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.328989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.329289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.329299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.329575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.329585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.329801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.329811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.330109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.330119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.330409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.330419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.330617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.330628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.330892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.330902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.331214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.331224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.331535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.331545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 
00:31:42.843 [2024-10-01 08:46:34.331853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.331863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.332178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.332188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.332373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.332383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.332703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.332713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.333001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.333011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.333200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.333210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.333530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.333540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.333710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.333720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.333947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.333957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.334245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.334255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 
00:31:42.843 [2024-10-01 08:46:34.334594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.334610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.334938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.843 [2024-10-01 08:46:34.334948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.843 qpair failed and we were unable to recover it. 00:31:42.843 [2024-10-01 08:46:34.335241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.335252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.335441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.335452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.335779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.335790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.336134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.336145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.336507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.336516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.336817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.336827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.337140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.337150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.337435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.337445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 
00:31:42.844 [2024-10-01 08:46:34.337797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.337807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.338068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.338078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.338391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.338401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.338665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.338677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.338967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.338976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.339279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.339289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.339613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.339624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.339936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.339946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.340264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.340275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.340617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.340626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 
00:31:42.844 [2024-10-01 08:46:34.340863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.340873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.341188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.341198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.341526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.341536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.341867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.341877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.342175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.342185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.342498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.342508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.342784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.342793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.343134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.343144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.343498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.343508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.343768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.343778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 
00:31:42.844 [2024-10-01 08:46:34.343937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.343949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.344145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.344155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.344476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.344485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.344815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.344825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.345127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.345137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.345365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.345375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.345562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.345580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.345788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.345798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.346085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.346095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 00:31:42.844 [2024-10-01 08:46:34.346391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.844 [2024-10-01 08:46:34.346400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.844 qpair failed and we were unable to recover it. 
00:31:42.844 [2024-10-01 08:46:34.346596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.346608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.346895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.346905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.347206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.347216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.347527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.347537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.347802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.347812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.348151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.348162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.348343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.348353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.348557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.348567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.348869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.348879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.349253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.349263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 
00:31:42.845 [2024-10-01 08:46:34.349591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.349600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.349930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.349940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.350284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.350294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.350473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.350483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.350782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.350792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.351018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.351028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.351315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.351324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.351648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.351658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.351959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.351969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.352287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.352298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 
00:31:42.845 [2024-10-01 08:46:34.352611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.352621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.352807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.352817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.353088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.353098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.353284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.353294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.353667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.353676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.353853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.353863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.354159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.354169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.354460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.354470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.354795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.354805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.355108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.355118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 
00:31:42.845 [2024-10-01 08:46:34.355432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.355442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.355721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.355731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.845 [2024-10-01 08:46:34.356012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.845 [2024-10-01 08:46:34.356023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.845 qpair failed and we were unable to recover it. 00:31:42.846 [2024-10-01 08:46:34.356310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.846 [2024-10-01 08:46:34.356320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.846 qpair failed and we were unable to recover it. 00:31:42.846 [2024-10-01 08:46:34.356639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.846 [2024-10-01 08:46:34.356649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.846 qpair failed and we were unable to recover it. 00:31:42.846 [2024-10-01 08:46:34.356999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.846 [2024-10-01 08:46:34.357010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.846 qpair failed and we were unable to recover it. 00:31:42.846 [2024-10-01 08:46:34.357271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.846 [2024-10-01 08:46:34.357283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.846 qpair failed and we were unable to recover it. 00:31:42.846 [2024-10-01 08:46:34.357552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.846 [2024-10-01 08:46:34.357563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.846 qpair failed and we were unable to recover it. 00:31:42.846 [2024-10-01 08:46:34.357871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.846 [2024-10-01 08:46:34.357882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.846 qpair failed and we were unable to recover it. 00:31:42.846 [2024-10-01 08:46:34.358044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.846 [2024-10-01 08:46:34.358055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.846 qpair failed and we were unable to recover it. 00:31:42.846 [2024-10-01 08:46:34.358316] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
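For context (an editorial annotation, not part of the captured output): errno 111 on Linux is ECONNREFUSED, returned when the target host actively refuses the connection because nothing is listening on that port, so each connect() issued by posix_sock_create fails immediately and nvme_tcp_qpair_connect_sock abandons the qpair. The minimal standalone C sketch below reproduces the same failure mode; the address 10.0.0.2 and port 4420 (the NVMe/TCP default) are taken from the log, everything else is illustrative.

/* Minimal sketch: connect() to a reachable host with no listener on the
 * port fails with errno = 111 (ECONNREFUSED) on Linux, matching the
 * errors above. If 10.0.0.2 is unreachable from your machine you will
 * see ETIMEDOUT or EHOSTUNREACH instead. Build: cc -o refused refused.c */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}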
00:31:42.846 [2024-10-01 08:46:34.358316] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:42.846 [2024-10-01 08:46:34.358345] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:42.846 [2024-10-01 08:46:34.358353] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:42.846 [2024-10-01 08:46:34.358365] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:42.846 [2024-10-01 08:46:34.358372] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:42.846 [2024-10-01 08:46:34.358357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.846 [2024-10-01 08:46:34.358368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.846 qpair failed and we were unable to recover it.
00:31:42.846 [the three messages above repeat 6 more times, 08:46:34.358672 through 08:46:34.360205, with only the timestamps changing]
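For context (annotation, not part of the captured output): the app_setup_trace NOTICE lines above are SPDK's standard trace setup hints. The mask 0xFFFF enables all tracepoint groups (bits 0 through 15) for the nvmf app, and the trace records are kept in a shared-memory file, here /dev/shm/nvmf_trace.0 as the log states. They can be read live on the host with the quoted command 'spdk_trace -s nvmf -i 0', or the shm file can be copied off the machine and decoded later with spdk_trace, which is the practical option when a CI node like this one is wiped after the run.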
00:31:42.846 [2024-10-01 08:46:34.359924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5
00:31:42.846 [2024-10-01 08:46:34.360103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6
00:31:42.846 [2024-10-01 08:46:34.360418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7
00:31:42.846 [2024-10-01 08:46:34.360419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:31:42.846 [2024-10-01 08:46:34.360501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.846 [2024-10-01 08:46:34.360520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.846 qpair failed and we were unable to recover it.
00:31:42.846 [the three messages above repeat 8 more times, 08:46:34.360719 through 08:46:34.362652, with only the timestamps changing]
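For context (annotation, not part of the captured output): each "Reactor started on core N" line is one SPDK reactor, the framework's per-core event loop; the app was given a core mask covering cores 4 through 7, so four reactors come up and busy-poll their registered pollers. As a rough non-SPDK illustration of that pattern only, the POSIX sketch below pins one polling thread per core. The core numbers 4 through 7 come from the log; pthread_setaffinity_np is a glibc extension and needs _GNU_SOURCE.

/* Rough illustration (not SPDK code): one thread pinned per core,
 * mimicking the "Reactor started on core N" lines above.
 * Build: cc -pthread -o reactors reactors.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *reactor_loop(void *arg)
{
    long core = (long)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    printf("Reactor started on core %ld\n", core);
    /* A real reactor would now poll its registered pollers forever. */
    return NULL;
}

int main(void)
{
    pthread_t threads[4];
    for (long core = 4; core <= 7; core++)
        pthread_create(&threads[core - 4], NULL, reactor_loop, (void *)core);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}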
00:31:42.846 [2024-10-01 08:46:34.362840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.846 [2024-10-01 08:46:34.362851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.846 qpair failed and we were unable to recover it.
00:31:42.850 [the three messages above repeat 129 more times, 08:46:34.363169 through 08:46:34.398765, with only the timestamps changing]
00:31:42.850 [2024-10-01 08:46:34.398978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.398988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.399322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.399332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.399641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.399651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.399972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.399982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.400303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.400313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.400598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.400607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.400781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.400792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.400964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.400975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.401295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.401306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.401666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.401676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 
00:31:42.850 [2024-10-01 08:46:34.402004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.402015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.402308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.402318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.402701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.402711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.402989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.403001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.403322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.403332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.403524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.403537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.403758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.403768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.404036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.404046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.404362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.404372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.404680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.404689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 
00:31:42.850 [2024-10-01 08:46:34.405004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.405014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.405306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.405316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.405594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.405604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.405914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.405924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.406211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.406221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.406416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.406425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.406776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.406786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.406974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.406985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.407222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.407233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 00:31:42.850 [2024-10-01 08:46:34.407469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.850 [2024-10-01 08:46:34.407479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.850 qpair failed and we were unable to recover it. 
00:31:42.851 [2024-10-01 08:46:34.407795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.407806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.408120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.408130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.408296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.408306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.408501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.408510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.408812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.408822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.409158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.409168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.409480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.409490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.409774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.409784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.410007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.410017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.410359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.410369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 
00:31:42.851 [2024-10-01 08:46:34.410542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.410552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.410831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.410841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.411151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.411161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.411471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.411481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.411681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.411691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.412000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.412011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.412218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.412228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.412570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.412580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.412915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.412925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.413224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.413234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 
00:31:42.851 [2024-10-01 08:46:34.413575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.413585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.413774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.413785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.414086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.414096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.414426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.414437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.414766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.414776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.415062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.415072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.415256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.415269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.415580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.415591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.415920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.415930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.416196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.416206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 
00:31:42.851 [2024-10-01 08:46:34.416515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.416531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.416863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.416873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.417064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.417075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.417383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.417394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.417557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.417567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.417898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.417909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.418213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.418224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.418407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.418418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.851 [2024-10-01 08:46:34.418715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.851 [2024-10-01 08:46:34.418725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.851 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.419030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.419041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 
00:31:42.852 [2024-10-01 08:46:34.419325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.419335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.419524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.419533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.419735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.419746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.420084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.420094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.420403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.420412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.420591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.420600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.420945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.420954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.421204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.421214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.421380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.421389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.421655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.421665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 
00:31:42.852 [2024-10-01 08:46:34.421982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.421991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.422310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.422321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.422638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.422647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.422983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.423009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.423305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.423316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.423601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.423611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.423934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.423943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.424274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.424286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.424481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.424493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.424814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.424825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 
00:31:42.852 [2024-10-01 08:46:34.425009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.425020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.425271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.425281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.425530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.425540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.425722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.425731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.426038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.426049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.426352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.426362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.426693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.426703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.427020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.427030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.427233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.427243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.427351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.427361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 
00:31:42.852 [2024-10-01 08:46:34.427639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.427649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.427970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.427980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.428273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.428283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.428552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.428562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.428785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.428795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.428988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.429002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.429190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.852 [2024-10-01 08:46:34.429200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.852 qpair failed and we were unable to recover it. 00:31:42.852 [2024-10-01 08:46:34.429574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.429584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.429920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.429930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.430105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.430115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 
00:31:42.853 [2024-10-01 08:46:34.430453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.430463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.430756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.430766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.431094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.431105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.431426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.431437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.431767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.431777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.431985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.431999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.432167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.432176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.432444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.432454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.432787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.432797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.433017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.433026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 
00:31:42.853 [2024-10-01 08:46:34.433302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.433312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.433628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.433639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.433964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.433975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.434249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.434260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.434583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.434596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.434816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.434827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.435150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.435161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.435334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.435346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.435509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.435519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 00:31:42.853 [2024-10-01 08:46:34.435735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.853 [2024-10-01 08:46:34.435745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420 00:31:42.853 qpair failed and we were unable to recover it. 
00:31:42.853 [2024-10-01 08:46:34.436089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.853 [2024-10-01 08:46:34.436101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1180 with addr=10.0.0.2, port=4420
00:31:42.853 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats back-to-back for tqpair=0xde1180, timestamps 08:46:34.436089 through 08:46:34.485902 ...]
00:31:42.858 [2024-10-01 08:46:34.486414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:42.858 [2024-10-01 08:46:34.486456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420
00:31:42.858 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x7fbda4000b90, timestamps 08:46:34.486414 through 08:46:34.497966 ...]
00:31:42.859 [2024-10-01 08:46:34.498283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.498291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.498615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.498622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.498944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.498951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.499160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.499168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.499409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.499415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.499731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.499737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.500084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.500091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.500474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.500481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.500789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.500796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.501017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.501024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 
00:31:42.859 [2024-10-01 08:46:34.501284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.501291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.501570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.501577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.501883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.501890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.502099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.502106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.502407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.502416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.502713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.502720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.503025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.503032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.503383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.503390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.503677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.503685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.504017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.504025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 
00:31:42.859 [2024-10-01 08:46:34.504331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.504338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.859 qpair failed and we were unable to recover it. 00:31:42.859 [2024-10-01 08:46:34.504683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.859 [2024-10-01 08:46:34.504691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.505007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.505013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.505323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.505330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.505724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.505731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.505893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.505901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.506274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.506281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.506492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.506499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.506824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.506830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.507126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.507134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 
00:31:42.860 [2024-10-01 08:46:34.507460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.507466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.507779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.507786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.508107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.508114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.508280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.508287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.508465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.508472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.508671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.508678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.508869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.508876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.509185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.509192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.509509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.509516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.509801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.509809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 
00:31:42.860 [2024-10-01 08:46:34.510123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.510130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.510424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.510430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.510718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.510724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.511000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.511007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.511280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.511288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.511617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.511624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.511883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.511890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.512208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.512215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.512515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.512522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.512830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.512837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 
00:31:42.860 [2024-10-01 08:46:34.513133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.513141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.513490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.513497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.513810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.513817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.513990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.514001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.514353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.514360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.514661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.514668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.514978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.514984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.515276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.515284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.515455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.515461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 00:31:42.860 [2024-10-01 08:46:34.515759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.860 [2024-10-01 08:46:34.515766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.860 qpair failed and we were unable to recover it. 
00:31:42.861 [2024-10-01 08:46:34.516087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.516095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.516411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.516418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.516664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.516670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.516859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.516866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.517149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.517156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.517313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.517320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.517607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.517614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.517652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.517659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.517971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.517978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.518290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.518297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 
00:31:42.861 [2024-10-01 08:46:34.518476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.518483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.518791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.518798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.518837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.518843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.519104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.519111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.519271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.519279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.519603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.519609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.519806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.519813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.520112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.520119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.520446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.520453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.520803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.520810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 
00:31:42.861 [2024-10-01 08:46:34.521096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.521102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.521263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.521271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.521565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.521572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.521891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.521898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.522206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.522214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.522516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.522523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.522808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.522814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.523158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.523165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.523473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.523480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.523766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.523773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 
00:31:42.861 [2024-10-01 08:46:34.523927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.523935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.524133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.524140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.524471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.524477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.524756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.524763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.525033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.525040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.525369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.525384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.525659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.525666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.861 [2024-10-01 08:46:34.525987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.861 [2024-10-01 08:46:34.525995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.861 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.526282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.526288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.526557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.526563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 
00:31:42.862 [2024-10-01 08:46:34.526876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.526883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.527216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.527223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.527290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.527297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.527571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.527578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.527917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.527924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.528226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.528232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.528398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.528404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.528568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.528575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.528875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.528883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.529209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.529216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 
00:31:42.862 [2024-10-01 08:46:34.529496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.529503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.529822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.529829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.530130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.530137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.530442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.530449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.530631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.530638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.530816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.530822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.531099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.531106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.531430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.531436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.531625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.531632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.531946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.531953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 
00:31:42.862 [2024-10-01 08:46:34.532141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.532148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.532522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.532531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.532686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.532692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.532850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.532857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.533170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.533178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.533479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.533486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.533652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.533658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.534034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.534040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.534207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.534213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 00:31:42.862 [2024-10-01 08:46:34.534529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.534544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it. 
00:31:42.862 [2024-10-01 08:46:34.534857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.862 [2024-10-01 08:46:34.534864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.862 qpair failed and we were unable to recover it.
[... the same three-part error sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats continuously, with only timestamps advancing, from 08:46:34.534857 through 08:46:34.593417 ...]
00:31:42.868 [2024-10-01 08:46:34.593410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.593417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it.
00:31:42.868 [2024-10-01 08:46:34.593736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.593743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.594024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.594032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.594366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.594373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.594652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.594658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.594927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.594934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.595233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.595240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.595545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.595551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.595888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.595895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.596181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.596188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.868 [2024-10-01 08:46:34.596354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.596361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 
00:31:42.868 [2024-10-01 08:46:34.596571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.868 [2024-10-01 08:46:34.596577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.868 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.596733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.596741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.596911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.596917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.597098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.597105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.597449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.597456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.597780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.597787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.598088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.598095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.598411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.598418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.598645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.598651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.599003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.599010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 
00:31:42.869 [2024-10-01 08:46:34.599359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.599366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.599691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.599701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.600004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.600012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.600310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.600317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.600598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.600606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.600963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.600970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.601274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.601281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.601611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.601617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.601941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.601950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.602293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.602301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 
00:31:42.869 [2024-10-01 08:46:34.602597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.602604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.602926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.602933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.603238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.603246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.603551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.603558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.603803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.603810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.604117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.604124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.604397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.604403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.604574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.604580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.604750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.604756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.605029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.605036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 
00:31:42.869 [2024-10-01 08:46:34.605075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.605082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.605379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.605385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.605676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.605684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.605860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.605867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.606159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.606167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.606447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.606453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.606737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.606745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.607060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.869 [2024-10-01 08:46:34.607067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.869 qpair failed and we were unable to recover it. 00:31:42.869 [2024-10-01 08:46:34.607389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.607401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.607741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.607749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 
00:31:42.870 [2024-10-01 08:46:34.608097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.608104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.608412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.608419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.608738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.608747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.609045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.609052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.609408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.609415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.609574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.609582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.609971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.609978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.610353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.610360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.610696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.610703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.611025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.611032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 
00:31:42.870 [2024-10-01 08:46:34.611355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.611363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.611698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.611707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.611997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.612004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.612280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.612286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.612603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.612609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.612896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.612902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.613198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.613206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.613387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.613395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.613711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.613718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.613872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.613879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 
00:31:42.870 [2024-10-01 08:46:34.614253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.614260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.614563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.614570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.614898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.614905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.615186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.615193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.615524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.615531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.615742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.615748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.616087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.616095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.616356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.616363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.616721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.616728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.616943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.616951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 
00:31:42.870 [2024-10-01 08:46:34.617272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.617279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.617563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.617570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.617863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.617871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.618177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.618185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.618305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.618313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.870 qpair failed and we were unable to recover it. 00:31:42.870 [2024-10-01 08:46:34.618606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.870 [2024-10-01 08:46:34.618613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.618781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.618789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.619064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.619072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.619399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.619405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.619568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.619575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 
00:31:42.871 [2024-10-01 08:46:34.619870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.619877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.620037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.620045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.620202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.620208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.620381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.620387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.620690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.620697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.620989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.620999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.621275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.621282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.621606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.621613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.621897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.621903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.622201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.622208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 
00:31:42.871 [2024-10-01 08:46:34.622489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.622496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.622663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.622672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 [2024-10-01 08:46:34.622842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.622848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbda4000b90 with addr=10.0.0.2, port=4420 00:31:42.871 qpair failed and we were unable to recover it. 00:31:42.871 A controller has encountered a failure and is being reset. 00:31:42.871 [2024-10-01 08:46:34.623343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.871 [2024-10-01 08:46:34.623386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd7ed0 with addr=10.0.0.2, port=4420 00:31:42.871 [2024-10-01 08:46:34.623398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd7ed0 is same with the state(6) to be set 00:31:42.871 [2024-10-01 08:46:34.623414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd7ed0 (9): Bad file descriptor 00:31:42.871 [2024-10-01 08:46:34.623424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:42.871 [2024-10-01 08:46:34.623435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:42.871 [2024-10-01 08:46:34.623446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:42.871 Unable to reset the controller. 
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:43.447 Malloc0
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:43.447 [2024-10-01 08:46:35.079897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:43.447 [2024-10-01 08:46:35.120215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:43.447 08:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3943644
00:31:44.015 Controller properly reset.
00:31:49.295 Initializing NVMe Controllers
00:31:49.295 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:49.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:49.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:31:49.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:31:49.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:31:49.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:31:49.295 Initialization complete. Launching workers.
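Stripped of the xtrace noise, the target bring-up above is a short sequence of RPCs against the running nvmf_tgt. A sketch of the same sequence as direct scripts/rpc.py invocations follows; the $RPC path and the use of SPDK's default RPC socket are assumptions, while the method names and arguments mirror the log:

    # Hypothetical standalone replay of the target bring-up (assumes a running
    # nvmf_tgt listening on SPDK's default RPC socket).
    RPC=./spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_transport -t tcp -o         # TCP transport, options as in the log
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the two listeners added, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen above, and the host side can reconnect ("Controller properly reset.").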
00:31:49.295 Starting thread on core 1
00:31:49.295 Starting thread on core 2
00:31:49.295 Starting thread on core 3
00:31:49.295 Starting thread on core 0
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:31:49.295
00:31:49.295 real 0m11.344s
00:31:49.295 user 0m36.847s
00:31:49.295 sys 0m5.167s
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:49.295 ************************************
00:31:49.295 END TEST nvmf_target_disconnect_tc2
00:31:49.295 ************************************
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:49.295 rmmod nvme_tcp
00:31:49.295 rmmod nvme_fabrics
00:31:49.295 rmmod nvme_keyring
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 3944534 ']'
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 3944534
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3944534 ']'
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3944534
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3944534
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3944534'
00:31:49.295 killing process with pid 3944534
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3944534
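The nvmftestfini teardown above unloads the kernel NVMe/TCP stack inside a bounded retry loop (set +e; for i in {1..20}; modprobe -v -r ...), since the modules can remain busy briefly after the test exits. A simplified sketch of that idiom; the module names come from the rmmod output in the log, and the sleep is an illustrative addition:

    # Simplified sketch of the unload-with-retries idiom: modprobe -r fails while
    # a module is still in use, so retry a bounded number of times instead of
    # aborting the whole teardown on the first busy module.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e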
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3944534
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:49.295 08:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:51.208 08:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:51.208
00:31:51.208 real 0m21.532s
00:31:51.208 user 1m3.888s
00:31:51.208 sys 0m11.575s
00:31:51.208 08:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:51.208 08:46:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:31:51.208 ************************************
00:31:51.208 END TEST nvmf_target_disconnect
00:31:51.208 ************************************
00:31:51.208 08:46:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:31:51.208
00:31:51.208 real 6m27.097s
00:31:51.208 user 11m33.460s
00:31:51.208 sys 2m12.422s
00:31:51.208 08:46:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:51.208 08:46:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:51.208 ************************************
00:31:51.208 END TEST nvmf_host
00:31:51.208 ************************************
00:31:51.208 08:46:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:31:51.208 08:46:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:31:51.208 08:46:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:31:51.208 08:46:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:31:51.208 08:46:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:51.208 08:46:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:51.208 ************************************
00:31:51.208 START TEST nvmf_target_core_interrupt_mode
00:31:51.208 ************************************
00:31:51.208 08:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
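run_test is the harness wrapper responsible for the START TEST/END TEST banners and the real/user/sys timings that appear throughout this log. A reduced illustration of the pattern; the real implementation in autotest_common.sh does more (xtrace management, result bookkeeping), so the name here is suffixed _sketch to mark it as hypothetical:

    # Hypothetical run_test-style wrapper: print a START banner, run the test
    # under `time` (which emits the real/user/sys lines), print the END banner.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test_sketch nvmf_target_core_interrupt_mode \
        ./spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode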
00:31:51.470 * Looking for test storage...
00:31:51.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:31:51.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:51.470 --rc genhtml_branch_coverage=1
00:31:51.470 --rc genhtml_function_coverage=1
00:31:51.470 --rc genhtml_legend=1
00:31:51.470 --rc geninfo_all_blocks=1
00:31:51.470 --rc geninfo_unexecuted_blocks=1
00:31:51.470
00:31:51.470 '
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:31:51.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:51.470 --rc genhtml_branch_coverage=1
00:31:51.470 --rc genhtml_function_coverage=1
00:31:51.470 --rc genhtml_legend=1
00:31:51.470 --rc geninfo_all_blocks=1
00:31:51.470 --rc geninfo_unexecuted_blocks=1
00:31:51.470
00:31:51.470 '
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:31:51.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:51.470 --rc genhtml_branch_coverage=1
00:31:51.470 --rc genhtml_function_coverage=1
00:31:51.470 --rc genhtml_legend=1
00:31:51.470 --rc geninfo_all_blocks=1
00:31:51.470 --rc geninfo_unexecuted_blocks=1
00:31:51.470
00:31:51.470 '
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:31:51.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:51.470 --rc genhtml_branch_coverage=1
00:31:51.470 --rc genhtml_function_coverage=1
00:31:51.470 --rc genhtml_legend=1
00:31:51.470 --rc geninfo_all_blocks=1
00:31:51.470 --rc geninfo_unexecuted_blocks=1
00:31:51.470
00:31:51.470 '
00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
Linux = Linux ']' 00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.470 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:51.471 ************************************ 00:31:51.471 START TEST nvmf_abort 00:31:51.471 ************************************ 00:31:51.471 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:51.732 * Looking for test storage... 00:31:51.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:51.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.732 --rc genhtml_branch_coverage=1 00:31:51.732 --rc genhtml_function_coverage=1 00:31:51.732 --rc genhtml_legend=1 00:31:51.732 --rc geninfo_all_blocks=1 00:31:51.732 --rc geninfo_unexecuted_blocks=1 00:31:51.732 00:31:51.732 ' 00:31:51.732 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:51.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.733 --rc genhtml_branch_coverage=1 00:31:51.733 --rc genhtml_function_coverage=1 00:31:51.733 --rc genhtml_legend=1 00:31:51.733 --rc geninfo_all_blocks=1 00:31:51.733 --rc geninfo_unexecuted_blocks=1 00:31:51.733 00:31:51.733 ' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:51.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.733 --rc genhtml_branch_coverage=1 00:31:51.733 --rc genhtml_function_coverage=1 00:31:51.733 --rc genhtml_legend=1 00:31:51.733 --rc geninfo_all_blocks=1 00:31:51.733 --rc geninfo_unexecuted_blocks=1 00:31:51.733 00:31:51.733 ' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:51.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.733 --rc genhtml_branch_coverage=1 00:31:51.733 --rc genhtml_function_coverage=1 00:31:51.733 --rc genhtml_legend=1 00:31:51.733 --rc geninfo_all_blocks=1 00:31:51.733 --rc geninfo_unexecuted_blocks=1 00:31:51.733 00:31:51.733 ' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:51.733 08:46:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:51.733 08:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.873 08:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:59.873 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:59.873 08:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:59.873 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.873 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:59.874 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == 
up ]] 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:59.874 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:31:59.874 00:31:59.874 --- 10.0.0.2 ping statistics --- 00:31:59.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.874 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:31:59.874 00:31:59.874 --- 10.0.0.1 ping statistics --- 00:31:59.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.874 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=3950007 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3950007 00:31:59.874 
08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3950007 ']' 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:59.874 08:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.874 [2024-10-01 08:46:50.786733] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.874 [2024-10-01 08:46:50.787719] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:31:59.874 [2024-10-01 08:46:50.787757] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.874 [2024-10-01 08:46:50.873895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:59.874 [2024-10-01 08:46:50.953527] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.874 [2024-10-01 08:46:50.953589] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.874 [2024-10-01 08:46:50.953597] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.874 [2024-10-01 08:46:50.953604] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.874 [2024-10-01 08:46:50.953611] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.874 [2024-10-01 08:46:50.955099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.874 [2024-10-01 08:46:50.955445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.874 [2024-10-01 08:46:50.955450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.874 [2024-10-01 08:46:51.037409] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:59.874 [2024-10-01 08:46:51.037492] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.874 [2024-10-01 08:46:51.038080] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:59.874 [2024-10-01 08:46:51.038373] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:59.874 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.874 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:31:59.874 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:31:59.874 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:59.874 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.874 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.874 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:59.874 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.875 [2024-10-01 08:46:51.624524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.875 Malloc0 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.875 Delay0 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.875 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:00.135 08:46:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:00.135 [2024-10-01 08:46:51.708478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.135 08:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:00.135 [2024-10-01 08:46:51.823387] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:02.680 Initializing NVMe Controllers 00:32:02.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:02.680 controller IO queue size 128 less than required 00:32:02.680 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:02.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:02.680 Initialization complete. Launching workers. 
00:32:02.680 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29139 00:32:02.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29196, failed to submit 66 00:32:02.680 success 29139, unsuccessful 57, failed 0 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.680 08:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.680 rmmod nvme_tcp 00:32:02.680 rmmod nvme_fabrics 00:32:02.680 rmmod nvme_keyring 00:32:02.680 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.680 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:02.680 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:02.680 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3950007 ']' 00:32:02.680 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3950007 00:32:02.680 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3950007 ']' 00:32:02.680 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3950007 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3950007 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3950007' 00:32:02.681 killing process with pid 3950007 
00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3950007 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3950007 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.681 08:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.591 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:04.591 00:32:04.591 real 0m13.123s 00:32:04.591 user 0m11.002s 00:32:04.591 sys 0m6.788s 00:32:04.591 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.592 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.592 ************************************ 00:32:04.592 END TEST nvmf_abort 00:32:04.592 ************************************ 00:32:04.853 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:04.853 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:04.853 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:04.853 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:04.853 ************************************ 00:32:04.853 START TEST nvmf_ns_hotplug_stress 00:32:04.853 ************************************ 00:32:04.853 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:04.853 * Looking for test storage... 
00:32:04.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:04.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.854 --rc genhtml_branch_coverage=1 00:32:04.854 --rc genhtml_function_coverage=1 00:32:04.854 --rc genhtml_legend=1 00:32:04.854 --rc geninfo_all_blocks=1 00:32:04.854 --rc geninfo_unexecuted_blocks=1 00:32:04.854 00:32:04.854 ' 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:04.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.854 --rc genhtml_branch_coverage=1 00:32:04.854 --rc genhtml_function_coverage=1 00:32:04.854 --rc genhtml_legend=1 00:32:04.854 --rc geninfo_all_blocks=1 00:32:04.854 --rc geninfo_unexecuted_blocks=1 00:32:04.854 00:32:04.854 ' 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:04.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.854 --rc genhtml_branch_coverage=1 00:32:04.854 --rc genhtml_function_coverage=1 00:32:04.854 --rc genhtml_legend=1 00:32:04.854 --rc geninfo_all_blocks=1 00:32:04.854 --rc geninfo_unexecuted_blocks=1 00:32:04.854 00:32:04.854 ' 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:04.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.854 --rc genhtml_branch_coverage=1 00:32:04.854 --rc genhtml_function_coverage=1 
00:32:04.854 --rc genhtml_legend=1 00:32:04.854 --rc geninfo_all_blocks=1 00:32:04.854 --rc geninfo_unexecuted_blocks=1 00:32:04.854 00:32:04.854 ' 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.854 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
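
The cmp_versions trace a few entries back is scripts/common.sh asking whether the installed lcov (1.15 here) predates 2.x, which selects the older --rc lcov_branch_coverage spelling seen in the exports above. The helper splits both version strings on '.', '-' and ':' and compares numerically field by field; a minimal stand-alone sketch of the same idea (simplified to the '<' case with numeric fields only - the real helper in scripts/common.sh handles more operators):

    version_lt() {  # returns 0 (true) when $1 sorts strictly before $2
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: use legacy --rc flags"
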
00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.116 08:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:13.255 08:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:13.255 08:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:13.255 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:13.255 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:13.256 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:13.256 08:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:13.256 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:13.256 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.256 08:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:13.256 08:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:13.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:32:13.256 00:32:13.256 --- 10.0.0.2 ping statistics --- 00:32:13.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.256 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:13.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:32:13.256 00:32:13.256 --- 10.0.0.1 ping statistics --- 00:32:13.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.256 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3954699 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3954699 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3954699 ']' 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
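
Condensed from the xtrace above: the two ports of the Intel E810 NIC (device 0x159b) are split into a point-to-point rig, with cvl_0_0 moved into a fresh network namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator (10.0.0.1); the two pings confirm reachability in both directions before nvmf_tgt is launched inside that namespace. Stripped of prefixes (and of the SPDK_NVMF comment tag on the firewall rule), the plumbing is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP in
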
00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:13.256 08:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:13.256 [2024-10-01 08:47:04.203093] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:13.256 [2024-10-01 08:47:04.204224] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:32:13.256 [2024-10-01 08:47:04.204278] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.256 [2024-10-01 08:47:04.294261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:13.256 [2024-10-01 08:47:04.387745] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.256 [2024-10-01 08:47:04.387803] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.256 [2024-10-01 08:47:04.387812] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.256 [2024-10-01 08:47:04.387819] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.256 [2024-10-01 08:47:04.387826] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.256 [2024-10-01 08:47:04.389072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.256 [2024-10-01 08:47:04.389244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.256 [2024-10-01 08:47:04.389342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.256 [2024-10-01 08:47:04.486472] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:13.256 [2024-10-01 08:47:04.486474] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:13.256 [2024-10-01 08:47:04.487091] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:13.256 [2024-10-01 08:47:04.487384] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
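
The startup notices above also confirm the core layout: nvmf_tgt was started with -m 0xE, and

    0xE = 0b1110  ->  reactors on cores 1, 2, 3 (the three "Reactor started" notices)
    0x1 = 0b0001  ->  core 0, reserved for the spdk_nvme_perf initiator started below

so target and initiator never share a core, and with --interrupt-mode each spdk_thread is switched to interrupt-driven operation instead of busy polling, as the "Set spdk_thread ... to intr mode" lines record.
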
00:32:13.256 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.256 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:32:13.257 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:32:13.257 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:13.257 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:13.257 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.257 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:32:13.257 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:13.517 [2024-10-01 08:47:05.218376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.517 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:13.777 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.777 [2024-10-01 08:47:05.583165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.037 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:14.038 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:14.298 Malloc0 00:32:14.298 08:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:14.298 Delay0 00:32:14.558 08:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:14.558 08:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:14.817 NULL1 00:32:14.817 08:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
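
Collapsed from the xtrace, the target is provisioned over the RPC socket with the sequence below (rpc.py abbreviates the full scripts/rpc.py path in the log). Two bdevs end up as namespaces of cnode1: Delay0, a delay bdev layered on the 32 MB Malloc0 with 1000000-unit latency on every op (about one second, if the usual microsecond units apply), and NULL1, a 1000-block null bdev the loop below will keep resizing:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
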
00:32:15.077 08:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3955348 00:32:15.077 08:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:15.077 08:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:15.077 08:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.077 08:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:15.337 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:15.337 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:15.597 true 00:32:15.597 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:15.597 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.597 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:15.857 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:15.858 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:16.118 true 00:32:16.118 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:16.118 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.118 08:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.379 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:16.379 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:16.639 true 00:32:16.639 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:16.639 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.899 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.899 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:16.899 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:17.160 true 00:32:17.160 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:17.160 08:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.543 Read completed with error (sct=0, sc=11) 00:32:18.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:18.543 08:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:18.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:18.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:18.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:18.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:18.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:18.543 08:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:18.543 08:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:18.543 true 00:32:18.802 08:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:18.802 08:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.635 08:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:19.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.635 08:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:19.635 08:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:19.897 true 00:32:19.897 08:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:19.897 08:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.157 08:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.157 08:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:20.157 08:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:20.417 true 00:32:20.418 08:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:20.418 08:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.678 08:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.678 08:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:20.678 08:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:20.939 true 00:32:20.939 08:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:20.939 08:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.200 08:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.461 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:21.461 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:21.461 true 00:32:21.461 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:21.461 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.722 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:32:21.722 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:21.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:21.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:21.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:21.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:21.983 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:21.983 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:21.983 true 00:32:21.983 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:21.983 08:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:22.924 08:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.185 08:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:23.185 08:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:23.185 true 00:32:23.185 08:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:23.185 08:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.446 08:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.708 08:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:23.708 08:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:23.708 true 00:32:23.708 08:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:23.708 08:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:25.093 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:32:25.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.093 08:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.093 08:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:25.093 08:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:25.354 true 00:32:25.354 08:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:25.354 08:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.296 08:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.296 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:26.296 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:26.557 true 00:32:26.557 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:26.557 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.819 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.819 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:26.819 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:27.080 true 00:32:27.080 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:27.080 08:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
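
Everything from PERF_PID onward is the stress loop proper: spdk_nvme_perf hammers the target from core 0 for 30 seconds (512-byte random reads, queue depth 128, -Q 1000 suppressing repeated error prints), while the script repeatedly detaches namespace 1, re-adds Delay0, and grows NULL1 by one block per pass, as the @44-@50 line tags show. Reconstructed from those tags (a paraphrase of ns_hotplug_stress.sh, not a verbatim copy):

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                          # @44: perf still running?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: yank namespace 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: put it back
        (( null_size++ ))                                              # @49: 1001, 1002, ...
        rpc.py bdev_null_resize NULL1 "$null_size"                     # @50: grow the null bdev
    done

The "Read completed with error (sct=0, sc=11)" floods are the expected collateral: assuming the decimal sc=11 print, that is generic-command status 0x0b, "Invalid Namespace or Format", which is what the initiator sees for reads that land while namespace 1 is detached.
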
00:32:28.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.467 08:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:28.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.467 08:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:28.467 08:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:28.731 true 00:32:28.731 08:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:28.731 08:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:29.681 08:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:29.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:29.681 08:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:29.681 08:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:29.942 true 00:32:29.942 08:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:29.942 08:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.942 08:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:30.205 08:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:30.205 08:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:30.466 true 00:32:30.466 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:30.466 08:47:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.466 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:30.727 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:30.727 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:30.988 true 00:32:30.988 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:30.988 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.249 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.249 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:31.249 08:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:31.509 true 00:32:31.509 08:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:31.509 08:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.770 08:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:31.770 08:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:31.770 08:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:32.031 true 00:32:32.031 08:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:32.031 08:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:32.975 08:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.975 08:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:32.975 08:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:33.236 true 00:32:33.236 08:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:33.236 08:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.528 08:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:33.528 08:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:33.528 08:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:33.789 true 00:32:33.789 08:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:33.789 08:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.175 08:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.176 08:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:35.176 08:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:35.176 true 00:32:35.436 08:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:35.436 08:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:36.269 08:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:36.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:36.269 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:36.269 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:36.530 true 00:32:36.530 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:36.530 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.791 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:36.791 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:36.791 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:37.052 true 00:32:37.052 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:37.052 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.314 08:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.314 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:37.314 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:37.576 true 00:32:37.577 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:37.577 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.838 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.100 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:38.100 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:38.100 true 00:32:38.100 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:38.100 08:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.362 08:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:38.623 08:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:32:38.623 08:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:32:38.623 true 00:32:38.623 08:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:38.623 08:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:39.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:39.683 08:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:39.683 08:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:32:39.683 08:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:32:39.950 true 00:32:39.950 08:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:39.950 08:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.211 08:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.211 08:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:32:40.211 08:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:32:40.472 true 00:32:40.472 08:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:40.472 08:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.732 08:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.732 08:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:32:40.732 08:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:32:40.992 true 00:32:40.992 08:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:40.992 08:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:41.933 08:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.933 08:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:32:41.933 08:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:32:42.194 true 00:32:42.194 08:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:42.194 08:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:42.455 08:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:42.715 08:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:32:42.715 08:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:32:42.715 true 00:32:42.715 08:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:42.715 08:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:42.976 08:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.236 08:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:32:43.236 08:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:32:43.236 true 00:32:43.236 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:43.236 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.496 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.756 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:32:43.757 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:32:43.757 true 00:32:43.757 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348 00:32:43.757 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.016 Message suppressed 999 
times: Read completed with error (sct=0, sc=11)
00:32:44.016 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:44.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:44.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:44.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:44.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:44.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:44.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:44.276 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:32:44.276 08:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:32:44.276 true
00:32:44.537 08:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348
00:32:44.537 08:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:45.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:45.366 08:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:45.366 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:45.366 Initializing NVMe Controllers
00:32:45.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:45.366 Controller IO queue size 128, less than required.
00:32:45.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:45.366 Controller IO queue size 128, less than required.
00:32:45.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:45.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:45.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:45.366 Initialization complete. Launching workers.
00:32:45.366 ========================================================
00:32:45.366                                                                          Latency(us)
00:32:45.366 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:32:45.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2458.35       1.20   27331.73    1594.87 1050722.59
00:32:45.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15390.95       7.52    8288.86    1560.55  400971.94
00:32:45.366 ========================================================
00:32:45.366 Total                                                                  :   17849.30       8.72   10911.60    1560.55 1050722.59
00:32:45.366
00:32:45.366 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:32:45.366 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:32:45.626 true
00:32:45.626 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3955348
00:32:45.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3955348) - No such process
00:32:45.626 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3955348
00:32:45.626 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:45.886 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:45.886 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:32:45.886 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:32:45.886 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:32:45.886 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:45.886 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:32:46.146 null0
00:32:46.146 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:46.146 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:46.146 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
null1
00:32:46.406 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:46.406 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:46.406 08:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:46.406 null2 00:32:46.406 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.406 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.406 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:46.666 null3 00:32:46.666 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.666 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.666 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:46.666 null4 00:32:46.666 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.666 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.666 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:46.926 null5 00:32:46.926 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:46.926 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:46.926 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:47.186 null6 00:32:47.186 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:47.187 null7 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
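
Before the multi-worker phase being set up here gets going, one consistency check on the perf shutdown summary printed just above: the Total row follows from the two per-namespace rows, with IOPS and MiB/s summed, the average latency weighted by IOPS, and min/max taken across both namespaces:

    \mathrm{IOPS}_{tot} = 2458.35 + 15390.95 = 17849.30, \qquad
    \overline{L}_{tot} = \frac{2458.35 \cdot 27331.73 + 15390.95 \cdot 8288.86}{17849.30} \approx 10911.6\ \mu s

The few, slow I/Os on NSID 1 (consistent with that namespace sitting behind the Delay0 bdev while being hot-removed) contribute little throughput but dominate the worst case, which is where the 1050722.59 us max in the Total row comes from.
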
00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
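
The interleaved "add_remove N nullM", "local nsid=N bdev=nullM", "(( i < 10 ))" and "pids+=($!)" records show each background worker starting up: add_remove is launched once per namespace and loops ten times attaching and detaching its null bdev under a fixed NSID. A sketch of the worker consistent with the @14-@18 trace lines (again an assumed reconstruction, with $rpc and $nqn as in the earlier sketch):

    # Sketch of the add_remove worker traced above
    # (ns_hotplug_stress.sh lines 14-18).
    add_remove() {
        local nsid=$1 bdev=$2                                      # line 14: e.g. add_remove 1 null0
        for ((i = 0; i < 10; i++)); do                             # line 16: ten rounds per worker
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev" # line 17: attach under fixed NSID
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"         # line 18: detach it again
        done
    }

Because all eight workers run concurrently against the same subsystem, their xtrace output interleaves, which is why the (( ++i )) and add/remove records below appear out of any single worker's order.
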
00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
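
Taken together, the nthreads=8 / pids=() records earlier, the bdev_null_create nullN 100 4096 calls (one 100 MiB null bdev with 4096-byte blocks per worker), and the pids+=($!) records here describe a spawn-and-join harness around add_remove; the wait with all eight PIDs shows up just below. A sketch under the same assumptions as the previous blocks:

    # Sketch of the eight-worker harness (script lines 58-66),
    # reusing add_remove, $rpc and $nqn from the sketches above.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do        # lines 59-60: one backing bdev per worker
        "$rpc" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do        # lines 62-64: fork the workers
        add_remove "$((i + 1))" "null$i" &      # NSIDs 1..8 hammer cnode1 in parallel
        pids+=($!)                              # line 64: remember each job's PID
    done
    wait "${pids[@]}"                           # line 66: join all eight workers
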
00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3961512 3961513 3961515 3961517 3961519 3961521 3961523 3961525 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.187 08:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:47.447 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.447 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:47.447 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:47.447 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:47.447 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:47.447 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:47.447 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:47.447 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.708 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.709 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.969 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:48.230 08:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:48.230 08:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:48.230 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:48.491 08:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.491 08:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:48.491 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:48.753 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.013 08:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.013 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:49.274 08:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.274 08:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:49.274 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:49.274 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:49.274 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.274 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.274 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:49.274 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:49.274 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:32:49.274 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:49.534 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:49.534 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:49.534 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.534 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.535 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.829 
08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.829 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.830 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:49.830 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:49.830 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:49.830 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.089 08:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:50.089 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:50.349 08:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.349 08:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.350 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:50.610 08:47:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:50.610 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:50.870 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.130 rmmod nvme_tcp 00:32:51.130 rmmod nvme_fabrics 00:32:51.130 rmmod nvme_keyring 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3954699 ']' 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3954699 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3954699 ']' 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3954699 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3954699 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3954699' 00:32:51.130 killing process with pid 3954699 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3954699 00:32:51.130 08:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3954699 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.391 08:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.391 08:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:53.939
00:32:53.939 real 0m48.684s
00:32:53.939 user 2m59.838s
00:32:53.939 sys 0m21.257s
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:53.939 ************************************
00:32:53.939 END TEST nvmf_ns_hotplug_stress
00:32:53.939 ************************************
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:53.939 ************************************
00:32:53.939 START TEST nvmf_delete_subsystem
00:32:53.939 ************************************
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:53.939 * Looking for test storage...
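The add/remove churn traced above is the ns_hotplug_stress loop itself: the xtrace tags show line @16 of target/ns_hotplug_stress.sh driving ten iterations, line @17 attaching namespaces, and line @18 detaching them, with namespace ID n always backed by bdev null(n-1) (e.g. "-n 8 ... null7"). The doubled "(( ++i ))" / "(( i < 10 ))" entries are several such loops running concurrently and interleaving their trace output. A minimal serial sketch consistent with the traced lines, not the verbatim SPDK script (the shuf-based random ordering and the variable names here are assumptions):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do                  # @16: ten passes over the subsystem
        for n in $(shuf -i 1-8); do                 # attach nsids 1-8 in random order
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"    # @17
        done
        for n in $(shuf -i 1-8); do                 # detach them in another random order
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"                     # @18
        done
    done

As the trace shows, nvmf_subsystem_add_ns takes the namespace ID via -n and the backing bdev as the trailing argument, while nvmf_subsystem_remove_ns takes only the subsystem NQN and the namespace ID; the point of the test is that the live target survives this hot attach/detach churn.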
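Once the loop counter reaches 10, the script clears its signal trap (@68) and calls nvmftestfini (@70); the remainder of the trace is teardown from nvmf/common.sh: nvmfcleanup syncs and unloads the nvme-tcp/nvme-fabrics kernel modules (the bare "rmmod nvme_tcp/nvme_fabrics/nvme_keyring" lines are modprobe -v output), killprocess stops the nvmf_tgt reactor (pid 3954699), iptr strips SPDK_NVMF rules from iptables, and remove_spdk_ns plus the final "ip -4 addr flush cvl_0_1" undo the test network setup. A condensed sketch of that traced sequence, with deliberately simplified helper bodies and an assumed $nvmfpid variable (the real common.sh splits this across several functions and does more checking):

    nvmf_teardown() {
        sync                                                    # @121
        for i in {1..20}; do                                    # @125: module may still be busy, retry
            modprobe -v -r nvme-tcp && break                    # @126
        done
        modprobe -v -r nvme-fabrics                             # @127
        kill "$nvmfpid" && wait "$nvmfpid"                      # @969/@974: stop nvmf_tgt (assumed pid var)
        iptables-save | grep -v SPDK_NVMF | iptables-restore    # @787: drop the test firewall rules
        ip -4 addr flush cvl_0_1                                # @303: clear the test interface address
    }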
00:32:53.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.939 --rc genhtml_branch_coverage=1 00:32:53.939 --rc genhtml_function_coverage=1 00:32:53.939 --rc genhtml_legend=1 00:32:53.939 --rc geninfo_all_blocks=1 00:32:53.939 --rc geninfo_unexecuted_blocks=1 00:32:53.939 00:32:53.939 ' 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.939 --rc genhtml_branch_coverage=1 00:32:53.939 --rc genhtml_function_coverage=1 00:32:53.939 --rc genhtml_legend=1 00:32:53.939 --rc geninfo_all_blocks=1 00:32:53.939 --rc geninfo_unexecuted_blocks=1 00:32:53.939 00:32:53.939 ' 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.939 --rc genhtml_branch_coverage=1 00:32:53.939 --rc genhtml_function_coverage=1 00:32:53.939 --rc genhtml_legend=1 00:32:53.939 --rc geninfo_all_blocks=1 00:32:53.939 --rc geninfo_unexecuted_blocks=1 00:32:53.939 00:32:53.939 ' 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.939 --rc genhtml_branch_coverage=1 00:32:53.939 --rc genhtml_function_coverage=1 00:32:53.939 --rc 
genhtml_legend=1 00:32:53.939 --rc geninfo_all_blocks=1 00:32:53.939 --rc geninfo_unexecuted_blocks=1 00:32:53.939 00:32:53.939 ' 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.939 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.940 08:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:53.940 08:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:02.089 08:47:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:02.089 08:47:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:02.089 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:02.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:02.089 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:33:02.090 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:02.090 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:02.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:02.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:33:02.090 00:33:02.090 --- 10.0.0.2 ping statistics --- 00:33:02.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.090 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:02.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:02.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:33:02.090 00:33:02.090 --- 10.0.0.1 ping statistics --- 00:33:02.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.090 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3966588 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3966588 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3966588 ']' 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
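
The nvmf_tcp_init trace above is easier to read with the xtrace prefixes stripped. As a sketch (the interface names and addresses are the ones this run discovered; the real helper in test/nvmf/common.sh covers more hardware cases), the first E810 port becomes the target inside a private network namespace and the second stays in the root namespace as the initiator:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator

Both pings coming back with 0% loss is what lets the helper return 0; the target application is then launched inside the namespace (the ip netns exec ... nvmf_tgt line above), so 10.0.0.2:4420 is owned by the target end to end.
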
00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:02.090 08:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.090 [2024-10-01 08:47:52.897559] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:02.090 [2024-10-01 08:47:52.898691] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:33:02.090 [2024-10-01 08:47:52.898745] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:02.091 [2024-10-01 08:47:52.970404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:02.091 [2024-10-01 08:47:53.043772] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.091 [2024-10-01 08:47:53.043809] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.091 [2024-10-01 08:47:53.043817] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.091 [2024-10-01 08:47:53.043823] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.091 [2024-10-01 08:47:53.043829] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:02.091 [2024-10-01 08:47:53.044733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.091 [2024-10-01 08:47:53.044734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.091 [2024-10-01 08:47:53.099457] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:02.091 [2024-10-01 08:47:53.100009] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:02.091 [2024-10-01 08:47:53.100354] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
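
waitforlisten is the gate between starting nvmf_tgt and issuing RPCs: it polls until the app answers on /var/tmp/spdk.sock, giving up if the process dies first. A minimal sketch of that pattern (hypothetical helper name; the real function in common/autotest_common.sh is more thorough, but rpc_get_methods is a standard cheap RPC to probe with):

# Poll the RPC socket until the target answers; bail out early if it exited.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do                  # max_retries=100, as traced above
        kill -0 "$pid" 2> /dev/null || return 1      # target process is gone
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}

The notices above also show what makes this suite distinct: with --interrupt-mode, the reactors and spdk_threads come up in interrupt mode and block on file descriptors instead of busy-polling, so delete_subsystem is exercised under event-driven scheduling rather than the usual poll-mode loop.
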
00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.091 [2024-10-01 08:47:53.741409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.091 [2024-10-01 08:47:53.773591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.091 NULL1 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.091 08:47:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.091 Delay0 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3966697 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:02.091 08:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:02.091 [2024-10-01 08:47:53.862653] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
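
Unwrapped from rpc_cmd and the xtrace noise, the test body assembled above is a short script. A sketch, with rpc.py standing in for the suite's rpc_cmd wrapper (which talks to the target inside the namespace):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Null bdev (1000 MB, 512-byte blocks) wrapped in a delay bdev so that
# I/O is still in flight when the subsystem disappears underneath it.
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Drive I/O at the slow namespace, then pull the subsystem out from under it.
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The flood of 'completed with error (sct=0, sc=8)' lines that follows is the expected outcome: deleting the subsystem while the delay bdev holds I/O in flight must abort that I/O cleanly rather than hang or crash the target.
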
00:33:04.002 08:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.002 08:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.002 08:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:04.263 Write completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 starting I/O failed: -6 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Write completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 starting I/O failed: -6 00:33:04.263 Write completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 starting I/O failed: -6 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Write completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 starting I/O failed: -6 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Write completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 starting I/O failed: -6 00:33:04.263 Write completed with error (sct=0, sc=8) 00:33:04.263 Write completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 starting I/O failed: -6 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Read completed with error (sct=0, sc=8) 00:33:04.263 Write completed with error (sct=0, sc=8) 00:33:04.263 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 [2024-10-01 08:47:56.063600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5750 is same with the state(6) to be set 00:33:04.264 Write 
completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed 
with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 starting I/O failed: -6 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 [2024-10-01 08:47:56.067168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc48400d450 is same with the state(6) to be set 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write 
completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Read completed with error (sct=0, sc=8) 00:33:04.264 Write completed with error (sct=0, sc=8) 00:33:05.652 [2024-10-01 08:47:57.042094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6a70 is same with the state(6) to be set 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 [2024-10-01 08:47:57.067528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5570 is same with the state(6) to be set 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with 
error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 [2024-10-01 08:47:57.067632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5930 is same with the state(6) to be set 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 [2024-10-01 08:47:57.069819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc48400d780 is same with the state(6) to be set 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Read completed with error (sct=0, sc=8) 00:33:05.652 Write completed with error (sct=0, sc=8) 00:33:05.652 [2024-10-01 08:47:57.069897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc48400cfe0 is same with the state(6) to be set 00:33:05.652 Initializing NVMe 
Controllers
00:33:05.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:05.652 Controller IO queue size 128, less than required.
00:33:05.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:33:05.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:33:05.652 Initialization complete. Launching workers.
00:33:05.652 ========================================================
00:33:05.652 Latency(us)
00:33:05.652 Device Information : IOPS MiB/s Average min max
00:33:05.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.76 0.09 883658.61 236.31 1006987.61
00:33:05.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.32 0.08 920102.52 278.79 1010047.53
00:33:05.652 ========================================================
00:33:05.652 Total : 334.08 0.16 901038.71 236.31 1010047.53
00:33:05.652
00:33:05.652 [2024-10-01 08:47:57.070516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6a70 (9): Bad file descriptor
00:33:05.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:33:05.652 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:05.652 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:33:05.652 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3966697
00:33:05.652 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3966697
00:33:05.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3966697) - No such process
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3966697
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3966697
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3966697 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem
-- common/autotest_common.sh@653 -- # es=1 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:05.914 [2024-10-01 08:47:57.605624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3967395 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3967395 00:33:05.914 08:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:05.914 [2024-10-01 08:47:57.671938] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:33:06.485 08:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:06.485 08:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3967395 00:33:06.485 08:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:07.057 08:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:07.057 08:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3967395 00:33:07.057 08:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:07.318 08:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:07.318 08:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3967395 00:33:07.318 08:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:07.890 08:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:07.890 08:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3967395 00:33:07.890 08:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:08.461 08:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:08.461 08:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3967395 00:33:08.461 08:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:09.033 08:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:09.033 08:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3967395 00:33:09.033 08:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:09.294 Initializing NVMe Controllers 00:33:09.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:09.294 Controller IO queue size 128, less than required. 00:33:09.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:09.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:09.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:09.294 Initialization complete. Launching workers. 
00:33:09.294 ======================================================== 00:33:09.294 Latency(us) 00:33:09.294 Device Information : IOPS MiB/s Average min max 00:33:09.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002442.82 1000237.72 1005905.29 00:33:09.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005729.42 1000243.06 1042511.47 00:33:09.294 ======================================================== 00:33:09.294 Total : 256.00 0.12 1004086.12 1000237.72 1042511.47 00:33:09.294 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3967395 00:33:09.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3967395) - No such process 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3967395 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:09.555 rmmod nvme_tcp 00:33:09.555 rmmod nvme_fabrics 00:33:09.555 rmmod nvme_keyring 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3966588 ']' 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3966588 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3966588 ']' 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3966588 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3966588 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3966588' 00:33:09.555 killing process with pid 3966588 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3966588 00:33:09.555 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3966588 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.817 08:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.732 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:11.732 00:33:11.732 real 0m18.287s 00:33:11.732 user 0m26.690s 00:33:11.732 sys 0m7.476s 00:33:11.732 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:11.732 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:11.732 ************************************ 00:33:11.732 END TEST nvmf_delete_subsystem 00:33:11.732 ************************************ 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:11.994 ************************************ 00:33:11.994 START TEST nvmf_host_management 00:33:11.994 ************************************ 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:11.994 * Looking for test storage... 00:33:11.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:11.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.994 --rc genhtml_branch_coverage=1 00:33:11.994 --rc genhtml_function_coverage=1 00:33:11.994 --rc genhtml_legend=1 00:33:11.994 --rc geninfo_all_blocks=1 00:33:11.994 --rc geninfo_unexecuted_blocks=1 00:33:11.994 00:33:11.994 ' 00:33:11.994 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:11.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.994 --rc genhtml_branch_coverage=1 00:33:11.994 --rc genhtml_function_coverage=1 00:33:11.994 --rc genhtml_legend=1 00:33:11.994 --rc geninfo_all_blocks=1 00:33:11.994 --rc geninfo_unexecuted_blocks=1 00:33:11.994 00:33:11.994 ' 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:11.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.995 --rc genhtml_branch_coverage=1 00:33:11.995 --rc genhtml_function_coverage=1 00:33:11.995 --rc genhtml_legend=1 00:33:11.995 --rc geninfo_all_blocks=1 00:33:11.995 --rc geninfo_unexecuted_blocks=1 00:33:11.995 00:33:11.995 ' 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:11.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.995 --rc genhtml_branch_coverage=1 00:33:11.995 --rc genhtml_function_coverage=1 00:33:11.995 --rc genhtml_legend=1 
00:33:11.995 --rc geninfo_all_blocks=1 00:33:11.995 --rc geninfo_unexecuted_blocks=1 00:33:11.995 00:33:11.995 ' 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.995 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.256 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.257 08:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.257 08:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.847 08:48:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.847 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- 
# pci_devs=("${e810[@]}") 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:18.848 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:18.848 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.848 
08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:18.848 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:18.848 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.848 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:19.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:33:19.110 00:33:19.110 --- 10.0.0.2 ping statistics --- 00:33:19.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.110 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:19.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:33:19.110 00:33:19.110 --- 10.0.0.1 ping statistics --- 00:33:19.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.110 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3972825 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3972825 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3972825 ']' 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
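For reference, the target-namespace plumbing traced in the nvmf/common.sh records above (lines 250-291 of that helper) reduces to the sequence below. This is a condensed sketch, not a replacement for the helper: the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are taken from this log, and the real iptables call additionally tags the rule with an SPDK_NVMF comment so the cleanup traced earlier (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip it.

    # Move the target-side port into its own namespace, address both ends,
    # open TCP/4420 toward the target, then verify reachability.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, matching the ping output above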
00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.110 08:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:19.110 [2024-10-01 08:48:10.852864] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:19.110 [2024-10-01 08:48:10.854027] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:33:19.110 [2024-10-01 08:48:10.854084] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.372 [2024-10-01 08:48:10.945005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:19.372 [2024-10-01 08:48:11.039746] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.372 [2024-10-01 08:48:11.039805] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.372 [2024-10-01 08:48:11.039814] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.372 [2024-10-01 08:48:11.039821] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.372 [2024-10-01 08:48:11.039827] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.372 [2024-10-01 08:48:11.041852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:19.372 [2024-10-01 08:48:11.042048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:19.372 [2024-10-01 08:48:11.042220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.372 [2024-10-01 08:48:11.042221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:33:19.372 [2024-10-01 08:48:11.130976] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:19.372 [2024-10-01 08:48:11.131753] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:19.372 [2024-10-01 08:48:11.132548] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:19.372 [2024-10-01 08:48:11.132735] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:19.372 [2024-10-01 08:48:11.132880] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
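A note on the core masks in these records: -m 0x1E is binary 11110, so the target claims cores 1-4 (hence the four "Reactor started on core N" notices above) and leaves core 0 free for the initiator-side tools; bdevperf is launched further down with -c 0x1 and reports its single reactor on core 0, and the earlier spdk_nvme_perf run used -c 0xC (cores 2 and 3), which is why its I/O queues were associated with lcores 2 and 3. A small illustrative decoder for such masks, not part of the harness:

    # Decode an SPDK/DPDK core mask into the cores it selects.
    mask=0x1E
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && echo "core $core"
    done
    # 0x1E -> cores 1..4; 0xC -> cores 2,3; 0x1 -> core 0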
00:33:19.942 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.942 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:33:19.942 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:19.942 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:19.942 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.942 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.943 [2024-10-01 08:48:11.695122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.943 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.943 Malloc0 00:33:19.943 [2024-10-01 08:48:11.763333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3972986 00:33:20.203 08:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3972986 /var/tmp/bdevperf.sock 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3972986 ']' 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:20.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:33:20.203 { 00:33:20.203 "params": { 00:33:20.203 "name": "Nvme$subsystem", 00:33:20.203 "trtype": "$TEST_TRANSPORT", 00:33:20.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.203 "adrfam": "ipv4", 00:33:20.203 "trsvcid": "$NVMF_PORT", 00:33:20.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.203 "hdgst": ${hdgst:-false}, 00:33:20.203 "ddgst": ${ddgst:-false} 00:33:20.203 }, 00:33:20.203 "method": "bdev_nvme_attach_controller" 00:33:20.203 } 00:33:20.203 EOF 00:33:20.203 )") 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 
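The heredoc in the gen_nvmf_target_json trace above is a template: $subsystem selects the controller number, while $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT are filled from the test environment. The rendered document (printed in the next record) reaches bdevperf as --json /dev/fd/63, i.e. the read end of a process substitution rather than a file on disk. The shape of the call, reconstructed from the traced command line:

    # bdevperf reads its controller config from the generated JSON;
    # <(...) is what shows up as /dev/fd/63 in the trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10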
00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:33:20.203 08:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:33:20.203 "params": { 00:33:20.203 "name": "Nvme0", 00:33:20.203 "trtype": "tcp", 00:33:20.203 "traddr": "10.0.0.2", 00:33:20.203 "adrfam": "ipv4", 00:33:20.203 "trsvcid": "4420", 00:33:20.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:20.203 "hdgst": false, 00:33:20.203 "ddgst": false 00:33:20.203 }, 00:33:20.203 "method": "bdev_nvme_attach_controller" 00:33:20.203 }' 00:33:20.203 [2024-10-01 08:48:11.877358] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:33:20.203 [2024-10-01 08:48:11.877429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972986 ] 00:33:20.203 [2024-10-01 08:48:11.939754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.203 [2024-10-01 08:48:12.004226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.463 Running I/O for 10 seconds... 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=814 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 814 -ge 100 ']' 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:21.036 [2024-10-01 08:48:12.738753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17234e0 is same with the state(6) to be set 00:33:21.036 [2024-10-01 08:48:12.738795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17234e0 is same with the state(6) to be set 00:33:21.036 [2024-10-01 08:48:12.738804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17234e0 is same with the state(6) to be set 00:33:21.036 [2024-10-01 08:48:12.738811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17234e0 is same with the state(6) to be set 00:33:21.036 [2024-10-01 08:48:12.738818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17234e0 is same with the state(6) to be set 00:33:21.036 [2024-10-01 08:48:12.738825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17234e0 is same with the state(6) to be set 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.036 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:21.036 [2024-10-01 08:48:12.746495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 
08:48:12.746530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 
08:48:12.746713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.036 [2024-10-01 08:48:12.746739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.036 [2024-10-01 08:48:12.746746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 
08:48:12.746883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.746979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.746986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 
08:48:12.747061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 
08:48:12.747232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 
08:48:12.747401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.037 [2024-10-01 08:48:12.747417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.037 [2024-10-01 08:48:12.747428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 08:48:12.747436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.038 [2024-10-01 08:48:12.747445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 08:48:12.747452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.038 [2024-10-01 08:48:12.747462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 08:48:12.747469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.038 [2024-10-01 08:48:12.747479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 08:48:12.747486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.038 [2024-10-01 08:48:12.747496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 08:48:12.747503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.038 [2024-10-01 08:48:12.747512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 08:48:12.747519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.038 [2024-10-01 08:48:12.747528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 08:48:12.747536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.038 [2024-10-01 08:48:12.747546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 08:48:12.747553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.038 [2024-10-01 08:48:12.747562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.038 [2024-10-01 
08:48:12.747569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.038 [2024-10-01 08:48:12.747578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.038 [2024-10-01 08:48:12.747586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.038 [2024-10-01 08:48:12.747596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.038 [2024-10-01 08:48:12.747603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.038 [2024-10-01 08:48:12.747612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:21.038 [2024-10-01 08:48:12.747619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.038 [2024-10-01 08:48:12.747672] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17c9ff0 was disconnected and freed. reset controller.
00:33:21.038 [2024-10-01 08:48:12.747716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.038 [2024-10-01 08:48:12.747727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.038 [2024-10-01 08:48:12.747735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.038 [2024-10-01 08:48:12.747742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.038 [2024-10-01 08:48:12.747751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.038 [2024-10-01 08:48:12.747758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.038 [2024-10-01 08:48:12.747766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:21.038 [2024-10-01 08:48:12.747774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:21.038 [2024-10-01 08:48:12.747782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b1280 is same with the state(6) to be set
00:33:21.038 [2024-10-01 08:48:12.748960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:33:21.038 task offset: 120576 on job bdev=Nvme0n1 fails
00:33:21.038
00:33:21.038                                                                           Latency(us)
00:33:21.038 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:21.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:21.038 Job: Nvme0n1 ended in about 0.56 seconds with error
00:33:21.038 Verification LBA range: start 0x0 length 0x400
00:33:21.038 	 Nvme0n1                                                             :       0.56    1646.76     102.92     113.69       0.00   35433.47    1672.53   37137.07
00:33:21.038 ===================================================================================================================
00:33:21.038 Total                                                                    :               1646.76     102.92     113.69       0.00   35433.47    1672.53   37137.07
00:33:21.038 [2024-10-01 08:48:12.750942] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:21.038 [2024-10-01 08:48:12.750964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b1280 (9): Bad file descriptor
00:33:21.038 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.038 08:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:33:21.038 [2024-10-01 08:48:12.756415] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3972986
00:33:21.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3972986) - No such process
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=()
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:33:21.980 {
00:33:21.980 "params": {
00:33:21.980 "name": "Nvme$subsystem",
00:33:21.980 "trtype": "$TEST_TRANSPORT",
00:33:21.980 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:21.980 "adrfam": "ipv4",
00:33:21.980 "trsvcid": "$NVMF_PORT",
00:33:21.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:21.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:21.980 "hdgst": ${hdgst:-false},
00:33:21.980 "ddgst": ${ddgst:-false}
00:33:21.980 },
00:33:21.980 "method": "bdev_nvme_attach_controller"
00:33:21.980 }
00:33:21.980 EOF
00:33:21.980 )")
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq .
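The two @100 lines above are a single shell pipeline: /dev/fd/62 is bash process substitution, so the run is equivalent to the following sketch (path shortened; flags copied from the trace):

# Equivalent standalone invocation (sketch): feed the generated JSON config
# to bdevperf over a process-substitution fd, keeping 64 outstanding 64 KiB
# verify I/Os in flight for 1 second.
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

# Sanity check on the table above: with -o 65536 each I/O is 64 KiB, so the
# MiB/s column is IOPS / 16, e.g. 1646.76 IOPS / 16 = 102.92 MiB/s.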
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=,
00:33:21.980 08:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:33:21.980 "params": {
00:33:21.980 "name": "Nvme0",
00:33:21.980 "trtype": "tcp",
00:33:21.980 "traddr": "10.0.0.2",
00:33:21.980 "adrfam": "ipv4",
00:33:21.980 "trsvcid": "4420",
00:33:21.980 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:21.980 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:21.980 "hdgst": false,
00:33:21.980 "ddgst": false
00:33:21.980 },
00:33:21.980 "method": "bdev_nvme_attach_controller"
00:33:21.980 }'
00:33:22.241 [2024-10-01 08:48:13.815913] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:33:22.241 [2024-10-01 08:48:13.815968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973339 ]
00:33:22.241 [2024-10-01 08:48:13.877795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:22.241 [2024-10-01 08:48:13.941518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:33:22.502 Running I/O for 1 seconds...
00:33:23.703 1472.00 IOPS, 92.00 MiB/s
00:33:23.703
00:33:23.703                                                                           Latency(us)
00:33:23.703 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:23.703 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:23.703 Verification LBA range: start 0x0 length 0x400
00:33:23.703 	 Nvme0n1                                                             :       1.02    1499.09      93.69       0.00       0.00   41986.45    9775.79   34952.53
00:33:23.703 ===================================================================================================================
00:33:23.703 Total                                                                    :               1499.09      93.69       0.00       0.00   41986.45    9775.79   34952.53
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:23.703 rmmod nvme_tcp
00:33:23.703 rmmod
nvme_fabrics 00:33:23.703 rmmod nvme_keyring 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3972825 ']' 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3972825 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3972825 ']' 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3972825 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:23.703 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3972825 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3972825' 00:33:23.963 killing process with pid 3972825 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3972825 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3972825 00:33:23.963 [2024-10-01 08:48:15.665445] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.963 08:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.505 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:26.506 00:33:26.506 real 0m14.169s 00:33:26.506 user 0m19.339s 00:33:26.506 sys 0m7.288s 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:26.506 ************************************ 00:33:26.506 END TEST nvmf_host_management 00:33:26.506 ************************************ 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:26.506 ************************************ 00:33:26.506 START TEST nvmf_lvol 00:33:26.506 ************************************ 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:26.506 * Looking for test storage... 
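run_test, visible above around both nvmf_host_management and nvmf_lvol, times a test script and brackets it with START/END banners. The following is a reduced sketch of that observable behaviour only; the real helper in autotest_common.sh does additional bookkeeping that is omitted here, and the shortened relative path in the usage line is illustrative:

# run_test (simplified sketch, not the SPDK implementation): print a START
# banner, run the test under `time`, then print an END banner.
run_test() {
    local test_name=$1
    shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
}

# e.g. run_test nvmf_lvol ./test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode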
00:33:26.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:33:26.506 08:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:26.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.506 --rc genhtml_branch_coverage=1 00:33:26.506 --rc genhtml_function_coverage=1 00:33:26.506 --rc genhtml_legend=1 00:33:26.506 --rc geninfo_all_blocks=1 00:33:26.506 --rc geninfo_unexecuted_blocks=1 00:33:26.506 00:33:26.506 ' 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:26.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.506 --rc genhtml_branch_coverage=1 00:33:26.506 --rc genhtml_function_coverage=1 00:33:26.506 --rc genhtml_legend=1 00:33:26.506 --rc geninfo_all_blocks=1 00:33:26.506 --rc geninfo_unexecuted_blocks=1 00:33:26.506 00:33:26.506 ' 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:26.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.506 --rc genhtml_branch_coverage=1 00:33:26.506 --rc genhtml_function_coverage=1 00:33:26.506 --rc genhtml_legend=1 00:33:26.506 --rc geninfo_all_blocks=1 00:33:26.506 --rc geninfo_unexecuted_blocks=1 00:33:26.506 00:33:26.506 ' 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:26.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.506 --rc genhtml_branch_coverage=1 00:33:26.506 --rc genhtml_function_coverage=1 00:33:26.506 --rc genhtml_legend=1 00:33:26.506 --rc geninfo_all_blocks=1 00:33:26.506 --rc geninfo_unexecuted_blocks=1 00:33:26.506 00:33:26.506 ' 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.506 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.507 08:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:26.507 08:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:34.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:34.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:34.734 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
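[annotation] The two "Found 0000:4b:00.0/.1 (0x8086 - 0x159b)" hits come from matching PCI vendor:device IDs against the tables built a few lines up (Intel 0x1592/0x159b for E810, 0x37d2 for X722, assorted 0x15b3 IDs for Mellanox). A standalone sketch of the same classification using plain sysfs rather than the harness's pci_bus_cache helper; the Mellanox wildcard is a simplification, the harness matches specific device IDs:

  # classify NICs the way the harness does, via PCI vendor:device IDs
  for dev in /sys/bus/pci/devices/*; do
    ven=$(<"$dev/vendor") did=$(<"$dev/device")
    case "$ven:$did" in
      0x8086:0x1592|0x8086:0x159b) echo "E810: ${dev##*/}" ;;
      0x8086:0x37d2)               echo "X722: ${dev##*/}" ;;
      0x15b3:*)                    echo "Mellanox: ${dev##*/}" ;;  # simplified match
    esac
  done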
00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:34.734 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:34.734 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:34.735 08:48:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:34.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:34.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:33:34.735 00:33:34.735 --- 10.0.0.2 ping statistics --- 00:33:34.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.735 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:34.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:34.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:33:34.735 00:33:34.735 --- 10.0.0.1 ping statistics --- 00:33:34.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.735 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:34.735 08:48:25 
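[annotation] The nvmf_tcp_init block above builds the whole test network out of the two E810 ports: cvl_0_0 moves into a fresh namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings prove reachability in both directions before the target starts. The same sequence flattened into a plain sketch, with the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns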
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3977997 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3977997 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3977997 ']' 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:34.735 08:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:34.735 [2024-10-01 08:48:25.567236] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:34.735 [2024-10-01 08:48:25.568220] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:33:34.735 [2024-10-01 08:48:25.568258] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.735 [2024-10-01 08:48:25.635439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:34.735 [2024-10-01 08:48:25.700238] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.735 [2024-10-01 08:48:25.700279] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.735 [2024-10-01 08:48:25.700287] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.735 [2024-10-01 08:48:25.700294] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.735 [2024-10-01 08:48:25.700300] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:34.735 [2024-10-01 08:48:25.701296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.735 [2024-10-01 08:48:25.701413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:34.735 [2024-10-01 08:48:25.701415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.735 [2024-10-01 08:48:25.761803] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:34.735 [2024-10-01 08:48:25.762212] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
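[annotation] The launch just traced runs nvmf_tgt inside the target namespace with -m 0x7, which pins three reactors (cores 0, 1, 2), exactly matching the three "Reactor started" notices; every spdk_thread then drops to interrupt mode (the remaining poll-group notices continue below). Flattened, with a minimal stand-in for waitforlisten, which in reality polls the RPC socket with retries and a timeout:

  # the launch traced above, flattened (paths and names from this run)
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!
  # minimal stand-in for waitforlisten: loop until the app answers on the socket
  until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
  done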
00:33:34.735 [2024-10-01 08:48:25.762279] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:34.735 [2024-10-01 08:48:25.762353] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:34.735 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:34.735 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:33:34.735 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:34.735 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:34.735 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:34.735 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.735 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:34.996 [2024-10-01 08:48:26.589979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.996 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:35.257 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:35.257 08:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:35.257 08:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:35.257 08:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:35.518 08:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:35.778 08:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a23b6b65-d22d-4d7b-9848-571b07af7771 00:33:35.778 08:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a23b6b65-d22d-4d7b-9848-571b07af7771 lvol 20 00:33:35.778 08:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7f65afc1-5d74-493a-8399-0c1523aeeb39 00:33:35.778 08:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:36.039 08:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f65afc1-5d74-493a-8399-0c1523aeeb39 00:33:36.299 08:48:27 
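[annotation] The provisioning chain just traced stacks the lvol target bottom-up: two 64 MiB malloc bdevs, striped into raid0, carrying an lvstore, from which a 20 MiB lvol is carved and attached as namespace 1 of cnode0. The same chain as a plain rpc.py sequence; the UUIDs differ per run, and $rpc_py is the path set earlier:

  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512                    # -> Malloc0
  $rpc_py bdev_malloc_create 64 512                    # -> Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)    # lvstore UUID
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB lvol, UUID returned
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"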
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:36.299 [2024-10-01 08:48:28.070099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.299 08:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:36.559 08:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3978430 00:33:36.559 08:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:36.559 08:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:37.500 08:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7f65afc1-5d74-493a-8399-0c1523aeeb39 MY_SNAPSHOT 00:33:37.759 08:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1e4234e6-a569-4913-9fd7-545b3994f8fe 00:33:37.759 08:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7f65afc1-5d74-493a-8399-0c1523aeeb39 30 00:33:38.019 08:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1e4234e6-a569-4913-9fd7-545b3994f8fe MY_CLONE 00:33:38.280 08:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0ffedc96-531f-4dd4-9f5b-af0049e592cd 00:33:38.280 08:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0ffedc96-531f-4dd4-9f5b-af0049e592cd 00:33:38.850 08:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3978430 00:33:46.988 Initializing NVMe Controllers 00:33:46.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:46.988 Controller IO queue size 128, less than required. 00:33:46.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:46.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:46.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:46.988 Initialization complete. Launching workers. 
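[annotation] Once the listener goes live on 10.0.0.2:4420, spdk_nvme_perf hammers the lvol with 10 seconds of 4 KiB random writes while the script snapshots, resizes (20 -> 30), clones, and inflates underneath it; the latency table that follows is that perf run completing. Condensed from the trace:

  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &  # background I/O
  perf_pid=$!
  sleep 1
  snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # snapshot under load
  $rpc_py bdev_lvol_resize "$lvol" 30                     # grow 20 -> 30
  clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
  $rpc_py bdev_lvol_inflate "$clone"                      # detach clone from snapshot
  wait "$perf_pid"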
00:33:46.988 ======================================================== 00:33:46.988 Latency(us) 00:33:46.988 Device Information : IOPS MiB/s Average min max 00:33:46.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12277.30 47.96 10426.72 1596.50 44989.16 00:33:46.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18079.40 70.62 7081.39 374.68 37741.00 00:33:46.988 ======================================================== 00:33:46.988 Total : 30356.70 118.58 8434.36 374.68 44989.16 00:33:46.988 00:33:46.988 08:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:47.248 08:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7f65afc1-5d74-493a-8399-0c1523aeeb39 00:33:47.248 08:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a23b6b65-d22d-4d7b-9848-571b07af7771 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:47.508 rmmod nvme_tcp 00:33:47.508 rmmod nvme_fabrics 00:33:47.508 rmmod nvme_keyring 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3977997 ']' 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3977997 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3977997 ']' 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3977997 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
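[annotation] Teardown mirrors setup in reverse: subsystem, lvol, and lvstore are deleted, the kernel modules are unloaded (the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines above are modprobe's verbose output), and the target is killed by pid; the iptables rule and namespace are unwound just after. A condensed sketch of what was traced:

  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc_py bdev_lvol_delete "$lvol"
  $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
  sync
  modprobe -v -r nvme-tcp     # verbose output: the rmmod lines seen above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"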
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3977997 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3977997' 00:33:47.508 killing process with pid 3977997 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3977997 00:33:47.508 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3977997 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.767 08:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:50.309 00:33:50.309 real 0m23.705s 00:33:50.309 user 0m55.692s 00:33:50.309 sys 0m10.672s 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:50.309 ************************************ 00:33:50.309 END TEST nvmf_lvol 00:33:50.309 ************************************ 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:50.309 ************************************ 00:33:50.309 START TEST nvmf_lvs_grow 00:33:50.309 
************************************ 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:50.309 * Looking for test storage... 00:33:50.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.309 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:50.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.310 --rc genhtml_branch_coverage=1 00:33:50.310 --rc genhtml_function_coverage=1 00:33:50.310 --rc genhtml_legend=1 00:33:50.310 --rc geninfo_all_blocks=1 00:33:50.310 --rc geninfo_unexecuted_blocks=1 00:33:50.310 00:33:50.310 ' 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:50.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.310 --rc genhtml_branch_coverage=1 00:33:50.310 --rc genhtml_function_coverage=1 00:33:50.310 --rc genhtml_legend=1 00:33:50.310 --rc geninfo_all_blocks=1 00:33:50.310 --rc geninfo_unexecuted_blocks=1 00:33:50.310 00:33:50.310 ' 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:50.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.310 --rc genhtml_branch_coverage=1 00:33:50.310 --rc genhtml_function_coverage=1 00:33:50.310 --rc genhtml_legend=1 00:33:50.310 --rc geninfo_all_blocks=1 00:33:50.310 --rc geninfo_unexecuted_blocks=1 00:33:50.310 00:33:50.310 ' 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:50.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.310 --rc genhtml_branch_coverage=1 00:33:50.310 --rc genhtml_function_coverage=1 00:33:50.310 --rc genhtml_legend=1 00:33:50.310 --rc geninfo_all_blocks=1 00:33:50.310 --rc geninfo_unexecuted_blocks=1 00:33:50.310 00:33:50.310 ' 00:33:50.310 08:48:41 
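[annotation] The scripts/common.sh chatter above is just a version comparison deciding whether the installed lcov (1.15 here) predates 2, which selects the branch/function coverage flags exported into LCOV_OPTS. A simplified equivalent of the traced lt/cmp_versions logic; the real helper also splits on '-' and ':' for pre-release-style strings, which this sketch omits:

  lt() {  # returns 0 (true) if $1 < $2, comparing dot-separated fields numerically
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
  }
  lt 1.15 2 && echo "old lcov: export the --rc lcov_*_coverage=1 options"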
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:50.310 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:50.311 08:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.449 08:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 
00:33:58.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:58.449 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:58.450 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:58.450 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:58.450 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.450 08:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.450 08:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:58.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:33:58.450 00:33:58.450 --- 10.0.0.2 ping statistics --- 00:33:58.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.450 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:58.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:33:58.450 00:33:58.450 --- 10.0.0.1 ping statistics --- 00:33:58.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.450 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3984726 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3984726 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3984726 ']' 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:58.450 08:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:58.450 [2024-10-01 08:48:49.256806] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
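The xtrace above is nvmf_tcp_init wiring the two E810 ports into a point-to-point NVMe/TCP topology: one port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings confirm the link in both directions before the target starts. A condensed replay of just the namespace plumbing, using the interface names discovered in this run (other rigs will find different net devices):

ip netns add cvl_0_0_ns_spdk                            # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns

The nvmf_tgt app itself is then launched inside the namespace via NVMF_TARGET_NS_CMD (the ip netns exec prefix visible in the nvmfappstart line below), while rpc.py keeps reaching it over the /var/tmp/spdk.sock UNIX socket, which ip netns exec does not hide since only the network namespace changes.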
00:33:58.450 [2024-10-01 08:48:49.257943] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:33:58.450 [2024-10-01 08:48:49.258002] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.450 [2024-10-01 08:48:49.329066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.450 [2024-10-01 08:48:49.401501] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:58.450 [2024-10-01 08:48:49.401538] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:58.450 [2024-10-01 08:48:49.401546] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:58.450 [2024-10-01 08:48:49.401553] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:58.450 [2024-10-01 08:48:49.401558] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:58.451 [2024-10-01 08:48:49.402106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.451 [2024-10-01 08:48:49.456370] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:58.451 [2024-10-01 08:48:49.456628] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:58.451 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:58.451 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:33:58.451 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:58.451 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:58.451 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:58.451 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.451 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:58.451 [2024-10-01 08:48:50.242553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:58.711 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:58.711 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:58.711 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:58.711 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:58.711 ************************************ 00:33:58.711 START TEST lvs_grow_clean 00:33:58.711 ************************************ 00:33:58.711 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:33:58.711 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:58.711 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:58.711 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:58.712 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:58.712 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:58.712 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:58.712 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:58.712 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:58.712 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:58.972 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:58.972 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:58.972 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=310c92ae-dfae-483c-814d-b7932d8c3904 00:33:58.972 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:33:58.972 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:59.233 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:59.233 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:59.233 08:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 310c92ae-dfae-483c-814d-b7932d8c3904 lvol 150 00:33:59.493 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2800e5e0-9714-4064-a771-21f383315bfe 00:33:59.493 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:59.493 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:59.493 [2024-10-01 08:48:51.262471] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:59.493 [2024-10-01 08:48:51.262567] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:59.493 true 00:33:59.493 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:33:59.493 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:59.753 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:59.753 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:00.014 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2800e5e0-9714-4064-a771-21f383315bfe 00:34:00.273 08:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:00.273 [2024-10-01 08:48:51.990877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.273 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3985364 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3985364 /var/tmp/bdevperf.sock 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3985364 ']' 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:00.534 08:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:00.534 [2024-10-01 08:48:52.229824] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:34:00.534 [2024-10-01 08:48:52.229903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985364 ] 00:34:00.534 [2024-10-01 08:48:52.309799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.795 [2024-10-01 08:48:52.399945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.368 08:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:01.368 08:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:34:01.368 08:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:01.629 Nvme0n1 00:34:01.890 08:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:01.890 [ 00:34:01.890 { 00:34:01.890 "name": "Nvme0n1", 00:34:01.890 "aliases": [ 00:34:01.890 "2800e5e0-9714-4064-a771-21f383315bfe" 00:34:01.890 ], 00:34:01.890 "product_name": "NVMe disk", 00:34:01.890 "block_size": 4096, 00:34:01.890 "num_blocks": 38912, 00:34:01.890 "uuid": "2800e5e0-9714-4064-a771-21f383315bfe", 00:34:01.890 "numa_id": 0, 00:34:01.890 "assigned_rate_limits": { 00:34:01.890 "rw_ios_per_sec": 0, 00:34:01.890 "rw_mbytes_per_sec": 0, 00:34:01.890 "r_mbytes_per_sec": 0, 00:34:01.890 "w_mbytes_per_sec": 0 00:34:01.890 }, 00:34:01.890 "claimed": false, 00:34:01.890 "zoned": false, 00:34:01.890 "supported_io_types": { 00:34:01.890 "read": true, 00:34:01.890 "write": true, 00:34:01.890 "unmap": true, 00:34:01.890 "flush": true, 00:34:01.890 "reset": true, 00:34:01.890 "nvme_admin": true, 00:34:01.890 "nvme_io": true, 00:34:01.890 "nvme_io_md": false, 00:34:01.890 "write_zeroes": true, 00:34:01.890 "zcopy": false, 00:34:01.890 "get_zone_info": false, 00:34:01.890 "zone_management": false, 00:34:01.890 "zone_append": false, 00:34:01.890 "compare": true, 00:34:01.890 "compare_and_write": true, 00:34:01.890 "abort": true, 00:34:01.890 "seek_hole": false, 00:34:01.890 "seek_data": false, 00:34:01.890 "copy": true, 
00:34:01.890 "nvme_iov_md": false 00:34:01.890 }, 00:34:01.890 "memory_domains": [ 00:34:01.890 { 00:34:01.890 "dma_device_id": "system", 00:34:01.890 "dma_device_type": 1 00:34:01.890 } 00:34:01.890 ], 00:34:01.890 "driver_specific": { 00:34:01.890 "nvme": [ 00:34:01.890 { 00:34:01.890 "trid": { 00:34:01.890 "trtype": "TCP", 00:34:01.890 "adrfam": "IPv4", 00:34:01.890 "traddr": "10.0.0.2", 00:34:01.890 "trsvcid": "4420", 00:34:01.890 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:01.890 }, 00:34:01.890 "ctrlr_data": { 00:34:01.890 "cntlid": 1, 00:34:01.890 "vendor_id": "0x8086", 00:34:01.890 "model_number": "SPDK bdev Controller", 00:34:01.890 "serial_number": "SPDK0", 00:34:01.890 "firmware_revision": "25.01", 00:34:01.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.890 "oacs": { 00:34:01.890 "security": 0, 00:34:01.890 "format": 0, 00:34:01.890 "firmware": 0, 00:34:01.890 "ns_manage": 0 00:34:01.890 }, 00:34:01.890 "multi_ctrlr": true, 00:34:01.890 "ana_reporting": false 00:34:01.890 }, 00:34:01.890 "vs": { 00:34:01.890 "nvme_version": "1.3" 00:34:01.890 }, 00:34:01.890 "ns_data": { 00:34:01.890 "id": 1, 00:34:01.890 "can_share": true 00:34:01.890 } 00:34:01.890 } 00:34:01.890 ], 00:34:01.890 "mp_policy": "active_passive" 00:34:01.890 } 00:34:01.890 } 00:34:01.890 ] 00:34:01.890 08:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3985543 00:34:01.890 08:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:01.890 08:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:02.151 Running I/O for 10 seconds... 
00:34:03.091 Latency(us) 00:34:03.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:03.091 Nvme0n1 : 1.00 17727.00 69.25 0.00 0.00 0.00 0.00 0.00 00:34:03.091 =================================================================================================================== 00:34:03.091 Total : 17727.00 69.25 0.00 0.00 0.00 0.00 0.00 00:34:03.091 00:34:04.032 08:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:04.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.032 Nvme0n1 : 2.00 17823.50 69.62 0.00 0.00 0.00 0.00 0.00 00:34:04.032 =================================================================================================================== 00:34:04.032 Total : 17823.50 69.62 0.00 0.00 0.00 0.00 0.00 00:34:04.032 00:34:04.032 true 00:34:04.032 08:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:04.032 08:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:04.293 08:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:04.293 08:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:04.293 08:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3985543 00:34:05.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:05.236 Nvme0n1 : 3.00 17877.33 69.83 0.00 0.00 0.00 0.00 0.00 00:34:05.236 =================================================================================================================== 00:34:05.236 Total : 17877.33 69.83 0.00 0.00 0.00 0.00 0.00 00:34:05.236 00:34:06.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:06.176 Nvme0n1 : 4.00 17920.00 70.00 0.00 0.00 0.00 0.00 0.00 00:34:06.176 =================================================================================================================== 00:34:06.176 Total : 17920.00 70.00 0.00 0.00 0.00 0.00 0.00 00:34:06.176 00:34:07.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:07.117 Nvme0n1 : 5.00 17945.60 70.10 0.00 0.00 0.00 0.00 0.00 00:34:07.117 =================================================================================================================== 00:34:07.117 Total : 17945.60 70.10 0.00 0.00 0.00 0.00 0.00 00:34:07.117 00:34:08.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:08.061 Nvme0n1 : 6.00 17962.67 70.17 0.00 0.00 0.00 0.00 0.00 00:34:08.061 =================================================================================================================== 00:34:08.061 Total : 17962.67 70.17 0.00 0.00 0.00 0.00 0.00 00:34:08.061 00:34:09.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:09.003 Nvme0n1 : 7.00 17983.86 70.25 0.00 0.00 0.00 0.00 
0.00 00:34:09.003 =================================================================================================================== 00:34:09.003 Total : 17983.86 70.25 0.00 0.00 0.00 0.00 0.00 00:34:09.003 00:34:09.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:09.944 Nvme0n1 : 8.00 18000.00 70.31 0.00 0.00 0.00 0.00 0.00 00:34:09.944 =================================================================================================================== 00:34:09.944 Total : 18000.00 70.31 0.00 0.00 0.00 0.00 0.00 00:34:09.944 00:34:11.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:11.330 Nvme0n1 : 9.00 18012.33 70.36 0.00 0.00 0.00 0.00 0.00 00:34:11.330 =================================================================================================================== 00:34:11.330 Total : 18012.33 70.36 0.00 0.00 0.00 0.00 0.00 00:34:11.330 00:34:12.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:12.272 Nvme0n1 : 10.00 18022.40 70.40 0.00 0.00 0.00 0.00 0.00 00:34:12.272 =================================================================================================================== 00:34:12.272 Total : 18022.40 70.40 0.00 0.00 0.00 0.00 0.00 00:34:12.272 00:34:12.272 00:34:12.272 Latency(us) 00:34:12.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:12.272 Nvme0n1 : 10.00 18021.11 70.39 0.00 0.00 7098.83 2662.40 13107.20 00:34:12.272 =================================================================================================================== 00:34:12.272 Total : 18021.11 70.39 0.00 0.00 7098.83 2662.40 13107.20 00:34:12.272 { 00:34:12.272 "results": [ 00:34:12.272 { 00:34:12.272 "job": "Nvme0n1", 00:34:12.272 "core_mask": "0x2", 00:34:12.272 "workload": "randwrite", 00:34:12.272 "status": "finished", 00:34:12.272 "queue_depth": 128, 00:34:12.272 "io_size": 4096, 00:34:12.272 "runtime": 10.004211, 00:34:12.272 "iops": 18021.111310027347, 00:34:12.272 "mibps": 70.39496605479432, 00:34:12.272 "io_failed": 0, 00:34:12.272 "io_timeout": 0, 00:34:12.272 "avg_latency_us": 7098.8251659483685, 00:34:12.272 "min_latency_us": 2662.4, 00:34:12.272 "max_latency_us": 13107.2 00:34:12.272 } 00:34:12.272 ], 00:34:12.272 "core_count": 1 00:34:12.272 } 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3985364 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3985364 ']' 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3985364 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3985364 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:12.272 08:49:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3985364' 00:34:12.272 killing process with pid 3985364 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3985364 00:34:12.272 Received shutdown signal, test time was about 10.000000 seconds 00:34:12.272 00:34:12.272 Latency(us) 00:34:12.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.272 =================================================================================================================== 00:34:12.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3985364 00:34:12.272 08:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:12.533 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.533 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:12.533 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:12.794 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:12.794 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:12.794 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:13.054 [2024-10-01 08:49:04.654622] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:13.054 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:13.054 request: 00:34:13.054 { 00:34:13.054 "uuid": "310c92ae-dfae-483c-814d-b7932d8c3904", 00:34:13.054 "method": "bdev_lvol_get_lvstores", 00:34:13.054 "req_id": 1 00:34:13.054 } 00:34:13.054 Got JSON-RPC error response 00:34:13.054 response: 00:34:13.054 { 00:34:13.054 "code": -19, 00:34:13.054 "message": "No such device" 00:34:13.054 } 00:34:13.313 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:34:13.313 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:13.313 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:13.313 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:13.313 08:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:13.313 aio_bdev 00:34:13.313 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2800e5e0-9714-4064-a771-21f383315bfe 00:34:13.313 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=2800e5e0-9714-4064-a771-21f383315bfe 00:34:13.313 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:13.313 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:34:13.313 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:13.313 08:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:13.313 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:13.573 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2800e5e0-9714-4064-a771-21f383315bfe -t 2000 00:34:13.573 [ 00:34:13.573 { 00:34:13.573 "name": "2800e5e0-9714-4064-a771-21f383315bfe", 00:34:13.573 "aliases": [ 00:34:13.573 "lvs/lvol" 00:34:13.573 ], 00:34:13.573 "product_name": "Logical Volume", 00:34:13.573 "block_size": 4096, 00:34:13.573 "num_blocks": 38912, 00:34:13.573 "uuid": "2800e5e0-9714-4064-a771-21f383315bfe", 00:34:13.573 "assigned_rate_limits": { 00:34:13.573 "rw_ios_per_sec": 0, 00:34:13.573 "rw_mbytes_per_sec": 0, 00:34:13.573 "r_mbytes_per_sec": 0, 00:34:13.573 "w_mbytes_per_sec": 0 00:34:13.573 }, 00:34:13.573 "claimed": false, 00:34:13.573 "zoned": false, 00:34:13.573 "supported_io_types": { 00:34:13.573 "read": true, 00:34:13.573 "write": true, 00:34:13.573 "unmap": true, 00:34:13.573 "flush": false, 00:34:13.573 "reset": true, 00:34:13.573 "nvme_admin": false, 00:34:13.573 "nvme_io": false, 00:34:13.573 "nvme_io_md": false, 00:34:13.573 "write_zeroes": true, 00:34:13.573 "zcopy": false, 00:34:13.573 "get_zone_info": false, 00:34:13.573 "zone_management": false, 00:34:13.573 "zone_append": false, 00:34:13.573 "compare": false, 00:34:13.573 "compare_and_write": false, 00:34:13.573 "abort": false, 00:34:13.573 "seek_hole": true, 00:34:13.573 "seek_data": true, 00:34:13.573 "copy": false, 00:34:13.573 "nvme_iov_md": false 00:34:13.573 }, 00:34:13.573 "driver_specific": { 00:34:13.573 "lvol": { 00:34:13.573 "lvol_store_uuid": "310c92ae-dfae-483c-814d-b7932d8c3904", 00:34:13.573 "base_bdev": "aio_bdev", 00:34:13.573 "thin_provision": false, 00:34:13.573 "num_allocated_clusters": 38, 00:34:13.573 "snapshot": false, 00:34:13.573 "clone": false, 00:34:13.573 "esnap_clone": false 00:34:13.573 } 00:34:13.573 } 00:34:13.573 } 00:34:13.573 ] 00:34:13.573 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:34:13.573 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:13.573 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:13.833 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:13.833 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:13.833 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:14.092 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 
)) 00:34:14.092 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2800e5e0-9714-4064-a771-21f383315bfe 00:34:14.092 08:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 310c92ae-dfae-483c-814d-b7932d8c3904 00:34:14.352 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:14.612 00:34:14.612 real 0m15.994s 00:34:14.612 user 0m15.641s 00:34:14.612 sys 0m1.427s 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:14.612 ************************************ 00:34:14.612 END TEST lvs_grow_clean 00:34:14.612 ************************************ 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:14.612 ************************************ 00:34:14.612 START TEST lvs_grow_dirty 00:34:14.612 ************************************ 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:14.612 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:14.872 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:14.872 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:15.132 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:15.132 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:15.132 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:15.132 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:15.132 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:15.392 08:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 77a89ef4-f499-4021-b545-cb551db6ca4c lvol 150 00:34:15.392 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c 00:34:15.392 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:15.392 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:15.652 [2024-10-01 08:49:07.266545] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:15.652 [2024-10-01 08:49:07.266690] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:15.652 true 00:34:15.652 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:15.652 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:15.652 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:15.652 08:49:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:15.913 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c 00:34:16.173 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.173 [2024-10-01 08:49:07.914708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.173 08:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3988333 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3988333 /var/tmp/bdevperf.sock 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3988333 ']' 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:16.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:16.434 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:16.434 [2024-10-01 08:49:08.132373] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
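Both variants provision the target identically before bdevperf connects; the dirty run above just repeated the sequence with a fresh lvstore UUID (77a89ef4-...). Condensed into the rpc.py calls the xtrace shows, plus a discovery listener on the same port (the UUID captures mirror the script's own command substitutions):

truncate -s 200M ./test/nvmf/target/aio_bdev              # 200 MiB backing file
rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 49 usable 4 MiB clusters
LVOL=$(rpc.py bdev_lvol_create -u "$LVS" lvol 150)        # 150 MiB lvol (38 clusters)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

The grow itself, exercised mid-run, is the three-step sequence seen earlier: truncate the backing file to 400M, bdev_aio_rescan aio_bdev (51200 -> 102400 blocks), then bdev_lvol_grow_lvstore -u "$LVS", after which total_data_clusters jumps from 49 to 99.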
00:34:16.434 [2024-10-01 08:49:08.132429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988333 ] 00:34:16.434 [2024-10-01 08:49:08.209034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.694 [2024-10-01 08:49:08.262474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.266 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:17.266 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:34:17.266 08:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:17.526 Nvme0n1 00:34:17.526 08:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:17.786 [ 00:34:17.786 { 00:34:17.786 "name": "Nvme0n1", 00:34:17.786 "aliases": [ 00:34:17.786 "8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c" 00:34:17.786 ], 00:34:17.786 "product_name": "NVMe disk", 00:34:17.786 "block_size": 4096, 00:34:17.786 "num_blocks": 38912, 00:34:17.786 "uuid": "8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c", 00:34:17.786 "numa_id": 0, 00:34:17.786 "assigned_rate_limits": { 00:34:17.786 "rw_ios_per_sec": 0, 00:34:17.786 "rw_mbytes_per_sec": 0, 00:34:17.786 "r_mbytes_per_sec": 0, 00:34:17.786 "w_mbytes_per_sec": 0 00:34:17.786 }, 00:34:17.786 "claimed": false, 00:34:17.786 "zoned": false, 00:34:17.786 "supported_io_types": { 00:34:17.786 "read": true, 00:34:17.786 "write": true, 00:34:17.786 "unmap": true, 00:34:17.786 "flush": true, 00:34:17.786 "reset": true, 00:34:17.786 "nvme_admin": true, 00:34:17.786 "nvme_io": true, 00:34:17.786 "nvme_io_md": false, 00:34:17.786 "write_zeroes": true, 00:34:17.786 "zcopy": false, 00:34:17.786 "get_zone_info": false, 00:34:17.786 "zone_management": false, 00:34:17.786 "zone_append": false, 00:34:17.786 "compare": true, 00:34:17.786 "compare_and_write": true, 00:34:17.786 "abort": true, 00:34:17.786 "seek_hole": false, 00:34:17.786 "seek_data": false, 00:34:17.786 "copy": true, 00:34:17.786 "nvme_iov_md": false 00:34:17.786 }, 00:34:17.786 "memory_domains": [ 00:34:17.786 { 00:34:17.786 "dma_device_id": "system", 00:34:17.786 "dma_device_type": 1 00:34:17.786 } 00:34:17.786 ], 00:34:17.787 "driver_specific": { 00:34:17.787 "nvme": [ 00:34:17.787 { 00:34:17.787 "trid": { 00:34:17.787 "trtype": "TCP", 00:34:17.787 "adrfam": "IPv4", 00:34:17.787 "traddr": "10.0.0.2", 00:34:17.787 "trsvcid": "4420", 00:34:17.787 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:17.787 }, 00:34:17.787 "ctrlr_data": { 00:34:17.787 "cntlid": 1, 00:34:17.787 "vendor_id": "0x8086", 00:34:17.787 "model_number": "SPDK bdev Controller", 00:34:17.787 "serial_number": "SPDK0", 00:34:17.787 "firmware_revision": "25.01", 00:34:17.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:17.787 "oacs": { 00:34:17.787 "security": 0, 00:34:17.787 "format": 0, 00:34:17.787 "firmware": 0, 00:34:17.787 "ns_manage": 0 00:34:17.787 }, 
00:34:17.787 "multi_ctrlr": true, 00:34:17.787 "ana_reporting": false 00:34:17.787 }, 00:34:17.787 "vs": { 00:34:17.787 "nvme_version": "1.3" 00:34:17.787 }, 00:34:17.787 "ns_data": { 00:34:17.787 "id": 1, 00:34:17.787 "can_share": true 00:34:17.787 } 00:34:17.787 } 00:34:17.787 ], 00:34:17.787 "mp_policy": "active_passive" 00:34:17.787 } 00:34:17.787 } 00:34:17.787 ] 00:34:17.787 08:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:17.787 08:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3988524 00:34:17.787 08:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:17.787 Running I/O for 10 seconds... 00:34:18.728 Latency(us) 00:34:18.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:18.728 Nvme0n1 : 1.00 17728.00 69.25 0.00 0.00 0.00 0.00 0.00 00:34:18.728 =================================================================================================================== 00:34:18.728 Total : 17728.00 69.25 0.00 0.00 0.00 0.00 0.00 00:34:18.728 00:34:19.668 08:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:19.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:19.928 Nvme0n1 : 2.00 17832.50 69.66 0.00 0.00 0.00 0.00 0.00 00:34:19.928 =================================================================================================================== 00:34:19.928 Total : 17832.50 69.66 0.00 0.00 0.00 0.00 0.00 00:34:19.928 00:34:19.928 true 00:34:19.928 08:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:19.928 08:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:20.189 08:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:20.189 08:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:20.189 08:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3988524 00:34:20.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:20.759 Nvme0n1 : 3.00 17877.67 69.83 0.00 0.00 0.00 0.00 0.00 00:34:20.759 =================================================================================================================== 00:34:20.759 Total : 17877.67 69.83 0.00 0.00 0.00 0.00 0.00 00:34:20.759 00:34:21.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:21.699 Nvme0n1 : 4.00 17920.25 70.00 0.00 0.00 0.00 0.00 0.00 00:34:21.699 =================================================================================================================== 
00:34:21.699 Total : 17920.25 70.00 0.00 0.00 0.00 0.00 0.00 00:34:21.699 00:34:23.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:23.081 Nvme0n1 : 5.00 17945.80 70.10 0.00 0.00 0.00 0.00 0.00 00:34:23.081 =================================================================================================================== 00:34:23.081 Total : 17945.80 70.10 0.00 0.00 0.00 0.00 0.00 00:34:23.081 00:34:24.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.022 Nvme0n1 : 6.00 17962.83 70.17 0.00 0.00 0.00 0.00 0.00 00:34:24.022 =================================================================================================================== 00:34:24.022 Total : 17962.83 70.17 0.00 0.00 0.00 0.00 0.00 00:34:24.022 00:34:24.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.961 Nvme0n1 : 7.00 17984.00 70.25 0.00 0.00 0.00 0.00 0.00 00:34:24.961 =================================================================================================================== 00:34:24.961 Total : 17984.00 70.25 0.00 0.00 0.00 0.00 0.00 00:34:24.961 00:34:25.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:25.901 Nvme0n1 : 8.00 17992.00 70.28 0.00 0.00 0.00 0.00 0.00 00:34:25.901 =================================================================================================================== 00:34:25.901 Total : 17992.00 70.28 0.00 0.00 0.00 0.00 0.00 00:34:25.901 00:34:26.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:26.895 Nvme0n1 : 9.00 18005.44 70.33 0.00 0.00 0.00 0.00 0.00 00:34:26.895 =================================================================================================================== 00:34:26.895 Total : 18005.44 70.33 0.00 0.00 0.00 0.00 0.00 00:34:26.895 00:34:27.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.879 Nvme0n1 : 10.00 18015.70 70.37 0.00 0.00 0.00 0.00 0.00 00:34:27.879 =================================================================================================================== 00:34:27.879 Total : 18015.70 70.37 0.00 0.00 0.00 0.00 0.00 00:34:27.879 00:34:27.879 00:34:27.879 Latency(us) 00:34:27.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.879 Nvme0n1 : 10.00 18015.04 70.37 0.00 0.00 7101.30 1856.85 12834.13 00:34:27.879 =================================================================================================================== 00:34:27.879 Total : 18015.04 70.37 0.00 0.00 7101.30 1856.85 12834.13 00:34:27.879 { 00:34:27.879 "results": [ 00:34:27.879 { 00:34:27.879 "job": "Nvme0n1", 00:34:27.879 "core_mask": "0x2", 00:34:27.879 "workload": "randwrite", 00:34:27.879 "status": "finished", 00:34:27.879 "queue_depth": 128, 00:34:27.879 "io_size": 4096, 00:34:27.879 "runtime": 10.003976, 00:34:27.879 "iops": 18015.03722120085, 00:34:27.879 "mibps": 70.37123914531583, 00:34:27.879 "io_failed": 0, 00:34:27.879 "io_timeout": 0, 00:34:27.879 "avg_latency_us": 7101.298502217635, 00:34:27.879 "min_latency_us": 1856.8533333333332, 00:34:27.879 "max_latency_us": 12834.133333333333 00:34:27.879 } 00:34:27.879 ], 00:34:27.879 "core_count": 1 00:34:27.879 } 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3988333 00:34:27.879 
08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3988333 ']' 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3988333 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3988333 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3988333' 00:34:27.879 killing process with pid 3988333 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3988333 00:34:27.879 Received shutdown signal, test time was about 10.000000 seconds 00:34:27.879 00:34:27.879 Latency(us) 00:34:27.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.879 =================================================================================================================== 00:34:27.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:27.879 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3988333 00:34:28.141 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:28.141 08:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:28.402 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:28.402 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3984726 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3984726 00:34:28.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3984726 Killed 
"${NVMF_APP[@]}" "$@" 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3990556 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3990556 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3990556 ']' 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:28.662 08:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:28.662 [2024-10-01 08:49:20.326310] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.662 [2024-10-01 08:49:20.327289] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:34:28.662 [2024-10-01 08:49:20.327331] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.662 [2024-10-01 08:49:20.393048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.662 [2024-10-01 08:49:20.456134] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.662 [2024-10-01 08:49:20.456169] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.662 [2024-10-01 08:49:20.456177] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.662 [2024-10-01 08:49:20.456184] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:28.662 [2024-10-01 08:49:20.456190] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.662 [2024-10-01 08:49:20.456720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.923 [2024-10-01 08:49:20.510811] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:28.923 [2024-10-01 08:49:20.511080] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:29.495 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:29.495 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:34:29.495 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:29.495 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:29.495 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:29.495 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:29.495 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:29.495 [2024-10-01 08:49:21.311935] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:29.495 [2024-10-01 08:49:21.312079] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:29.495 [2024-10-01 08:49:21.312114] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:29.756 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c -t 2000 00:34:30.017 [ 00:34:30.017 { 00:34:30.017 "name": "8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c", 00:34:30.017 "aliases": [ 00:34:30.017 "lvs/lvol" 00:34:30.017 ], 00:34:30.017 "product_name": "Logical Volume", 00:34:30.017 "block_size": 4096, 00:34:30.017 "num_blocks": 38912, 00:34:30.017 "uuid": "8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c", 00:34:30.017 "assigned_rate_limits": { 00:34:30.017 "rw_ios_per_sec": 0, 00:34:30.017 "rw_mbytes_per_sec": 0, 00:34:30.017 "r_mbytes_per_sec": 0, 00:34:30.017 "w_mbytes_per_sec": 0 00:34:30.017 }, 00:34:30.017 "claimed": false, 00:34:30.017 "zoned": false, 00:34:30.017 "supported_io_types": { 00:34:30.017 "read": true, 00:34:30.017 "write": true, 00:34:30.017 "unmap": true, 00:34:30.017 "flush": false, 00:34:30.017 "reset": true, 00:34:30.017 "nvme_admin": false, 00:34:30.017 "nvme_io": false, 00:34:30.017 "nvme_io_md": false, 00:34:30.017 "write_zeroes": true, 00:34:30.017 "zcopy": false, 00:34:30.017 "get_zone_info": false, 00:34:30.017 "zone_management": false, 00:34:30.017 "zone_append": false, 00:34:30.017 "compare": false, 00:34:30.017 "compare_and_write": false, 00:34:30.017 "abort": false, 00:34:30.017 "seek_hole": true, 00:34:30.017 "seek_data": true, 00:34:30.017 "copy": false, 00:34:30.017 "nvme_iov_md": false 00:34:30.017 }, 00:34:30.017 "driver_specific": { 00:34:30.017 "lvol": { 00:34:30.017 "lvol_store_uuid": "77a89ef4-f499-4021-b545-cb551db6ca4c", 00:34:30.018 "base_bdev": "aio_bdev", 00:34:30.018 "thin_provision": false, 00:34:30.018 "num_allocated_clusters": 38, 00:34:30.018 "snapshot": false, 00:34:30.018 "clone": false, 00:34:30.018 "esnap_clone": false 00:34:30.018 } 00:34:30.018 } 00:34:30.018 } 00:34:30.018 ] 00:34:30.018 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:34:30.018 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:30.018 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:30.279 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:30.279 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:30.279 08:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:30.279 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:30.279 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:30.541 [2024-10-01 08:49:22.177156] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:30.541 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:30.801 request: 00:34:30.801 { 00:34:30.801 "uuid": "77a89ef4-f499-4021-b545-cb551db6ca4c", 00:34:30.801 "method": "bdev_lvol_get_lvstores", 00:34:30.801 "req_id": 1 00:34:30.801 } 00:34:30.801 Got JSON-RPC error response 00:34:30.801 response: 00:34:30.801 { 00:34:30.801 "code": -19, 00:34:30.801 "message": "No such device" 00:34:30.801 } 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:30.801 
aio_bdev 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:34:30.801 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:30.802 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:30.802 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:31.061 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c -t 2000 00:34:31.322 [ 00:34:31.322 { 00:34:31.322 "name": "8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c", 00:34:31.322 "aliases": [ 00:34:31.322 "lvs/lvol" 00:34:31.322 ], 00:34:31.322 "product_name": "Logical Volume", 00:34:31.322 "block_size": 4096, 00:34:31.322 "num_blocks": 38912, 00:34:31.322 "uuid": "8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c", 00:34:31.322 "assigned_rate_limits": { 00:34:31.322 "rw_ios_per_sec": 0, 00:34:31.322 "rw_mbytes_per_sec": 0, 00:34:31.322 "r_mbytes_per_sec": 0, 00:34:31.322 "w_mbytes_per_sec": 0 00:34:31.322 }, 00:34:31.322 "claimed": false, 00:34:31.322 "zoned": false, 00:34:31.322 "supported_io_types": { 00:34:31.322 "read": true, 00:34:31.322 "write": true, 00:34:31.322 "unmap": true, 00:34:31.322 "flush": false, 00:34:31.322 "reset": true, 00:34:31.322 "nvme_admin": false, 00:34:31.322 "nvme_io": false, 00:34:31.322 "nvme_io_md": false, 00:34:31.322 "write_zeroes": true, 00:34:31.322 "zcopy": false, 00:34:31.322 "get_zone_info": false, 00:34:31.322 "zone_management": false, 00:34:31.322 "zone_append": false, 00:34:31.322 "compare": false, 00:34:31.322 "compare_and_write": false, 00:34:31.322 "abort": false, 00:34:31.322 "seek_hole": true, 00:34:31.322 "seek_data": true, 00:34:31.322 "copy": false, 00:34:31.322 "nvme_iov_md": false 00:34:31.322 }, 00:34:31.322 "driver_specific": { 00:34:31.322 "lvol": { 00:34:31.322 "lvol_store_uuid": "77a89ef4-f499-4021-b545-cb551db6ca4c", 00:34:31.322 "base_bdev": "aio_bdev", 00:34:31.322 "thin_provision": false, 00:34:31.322 "num_allocated_clusters": 38, 00:34:31.322 "snapshot": false, 00:34:31.322 "clone": false, 00:34:31.322 "esnap_clone": false 00:34:31.322 } 00:34:31.322 } 00:34:31.322 } 00:34:31.322 ] 00:34:31.322 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:34:31.322 08:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:31.322 08:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:31.322 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:31.322 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:31.322 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:31.584 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:31.584 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8c4d2cdc-dbb3-4a2e-9611-216380ea4d8c 00:34:31.844 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77a89ef4-f499-4021-b545-cb551db6ca4c 00:34:31.844 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:32.105 00:34:32.105 real 0m17.368s 00:34:32.105 user 0m35.242s 00:34:32.105 sys 0m2.917s 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:32.105 ************************************ 00:34:32.105 END TEST lvs_grow_dirty 00:34:32.105 ************************************ 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:32.105 nvmf_trace.0 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.105 rmmod nvme_tcp 00:34:32.105 rmmod nvme_fabrics 00:34:32.105 rmmod nvme_keyring 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3990556 ']' 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3990556 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3990556 ']' 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3990556 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:32.105 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3990556 00:34:32.366 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:32.366 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:32.366 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3990556' 00:34:32.366 killing process with pid 3990556 00:34:32.366 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3990556 00:34:32.366 08:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3990556 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 
00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.366 08:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.914 00:34:34.914 real 0m44.578s 00:34:34.914 user 0m53.699s 00:34:34.914 sys 0m10.446s 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:34.914 ************************************ 00:34:34.914 END TEST nvmf_lvs_grow 00:34:34.914 ************************************ 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:34.914 ************************************ 00:34:34.914 START TEST nvmf_bdev_io_wait 00:34:34.914 ************************************ 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:34.914 * Looking for test storage... 
00:34:34.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:34.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.914 --rc genhtml_branch_coverage=1 00:34:34.914 --rc genhtml_function_coverage=1 00:34:34.914 --rc genhtml_legend=1 00:34:34.914 --rc geninfo_all_blocks=1 00:34:34.914 --rc geninfo_unexecuted_blocks=1 00:34:34.914 00:34:34.914 ' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:34.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.914 --rc genhtml_branch_coverage=1 00:34:34.914 --rc genhtml_function_coverage=1 00:34:34.914 --rc genhtml_legend=1 00:34:34.914 --rc geninfo_all_blocks=1 00:34:34.914 --rc geninfo_unexecuted_blocks=1 00:34:34.914 00:34:34.914 ' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:34.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.914 --rc genhtml_branch_coverage=1 00:34:34.914 --rc genhtml_function_coverage=1 00:34:34.914 --rc genhtml_legend=1 00:34:34.914 --rc geninfo_all_blocks=1 00:34:34.914 --rc geninfo_unexecuted_blocks=1 00:34:34.914 00:34:34.914 ' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:34.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.914 --rc genhtml_branch_coverage=1 00:34:34.914 --rc genhtml_function_coverage=1 00:34:34.914 --rc genhtml_legend=1 00:34:34.914 --rc geninfo_all_blocks=1 00:34:34.914 --rc 
geninfo_unexecuted_blocks=1 00:34:34.914 00:34:34.914 ' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.914 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:34.915 08:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.055 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.055 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.055 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.055 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:43.056 08:49:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:43.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:43.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:43.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.056 08:49:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:43.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.056 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:43.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:34:43.057 00:34:43.057 --- 10.0.0.2 ping statistics --- 00:34:43.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.057 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:43.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:34:43.057 00:34:43.057 --- 10.0.0.1 ping statistics --- 00:34:43.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.057 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3995591 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3995591 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3995591 ']' 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
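The nvmf_tcp_init trace above builds the point-to-point test topology: one E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP on port 4420 and a ping in each direction to prove reachability. A standalone sketch of the same setup, assuming the same cvl_0_* interface names, back-to-back cabled ports, and root privileges:

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator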
00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:43.057 08:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.057 [2024-10-01 08:49:33.730162] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:43.057 [2024-10-01 08:49:33.731246] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:34:43.057 [2024-10-01 08:49:33.731294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.057 [2024-10-01 08:49:33.798722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:43.057 [2024-10-01 08:49:33.864557] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.057 [2024-10-01 08:49:33.864592] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.057 [2024-10-01 08:49:33.864600] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.057 [2024-10-01 08:49:33.864606] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.057 [2024-10-01 08:49:33.864612] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:43.057 [2024-10-01 08:49:33.866268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.057 [2024-10-01 08:49:33.866386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:43.057 [2024-10-01 08:49:33.866542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.057 [2024-10-01 08:49:33.866543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:34:43.057 [2024-10-01 08:49:33.866800] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
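The target itself is launched inside that namespace with --wait-for-rpc, so startup pauses before framework initialization until an RPC resumes it; the NOTICE lines above confirm interrupt mode and one reactor per core of the 0xF mask. A condensed sketch of the launch, assuming an SPDK tree at ./spdk (the harness uses its waitforlisten helper to detect readiness; polling any RPC works the same way):

ip netns exec cvl_0_0_ns_spdk \
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# -m 0xF pins reactors to cores 0-3; -e 0xFFFF enables every tracepoint group.
# Wait until the app accepts RPCs on the default /var/tmp/spdk.sock socket.
until ./spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done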
00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.057 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.057 [2024-10-01 08:49:34.616403] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:43.057 [2024-10-01 08:49:34.616883] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:43.057 [2024-10-01 08:49:34.617481] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:43.057 [2024-10-01 08:49:34.617689] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
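Starting with --wait-for-rpc matters here because bdev options can only be changed before the framework initializes: bdev_set_options -p 5 -c 1 shrinks the bdev_io pool and cache to almost nothing, which is what forces the IO-wait path this test exercises, and framework_start_init then completes startup; the transport, malloc namespace, subsystem, and listener RPCs just below finish the bring-up. The whole sequence as a sketch over scripts/rpc.py (rpc_cmd in the harness is a wrapper around it; every command and argument is taken from the trace):

rpc="./spdk/scripts/rpc.py"
$rpc bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache to trigger IO wait
$rpc framework_start_init              # resume the startup deferred by --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192   # -o: disable C2H-success optimization,
                                               # -u: 8 KiB IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420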
00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.058 [2024-10-01 08:49:34.626986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.058 Malloc0 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.058 [2024-10-01 08:49:34.703167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3995648 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3995650 00:34:43.058 08:49:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:43.058 { 00:34:43.058 "params": { 00:34:43.058 "name": "Nvme$subsystem", 00:34:43.058 "trtype": "$TEST_TRANSPORT", 00:34:43.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.058 "adrfam": "ipv4", 00:34:43.058 "trsvcid": "$NVMF_PORT", 00:34:43.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.058 "hdgst": ${hdgst:-false}, 00:34:43.058 "ddgst": ${ddgst:-false} 00:34:43.058 }, 00:34:43.058 "method": "bdev_nvme_attach_controller" 00:34:43.058 } 00:34:43.058 EOF 00:34:43.058 )") 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3995653 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3995658 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:43.058 { 00:34:43.058 "params": { 00:34:43.058 "name": "Nvme$subsystem", 00:34:43.058 "trtype": "$TEST_TRANSPORT", 00:34:43.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.058 "adrfam": "ipv4", 00:34:43.058 "trsvcid": "$NVMF_PORT", 00:34:43.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.058 "hdgst": ${hdgst:-false}, 00:34:43.058 "ddgst": ${ddgst:-false} 00:34:43.058 }, 00:34:43.058 "method": "bdev_nvme_attach_controller" 00:34:43.058 } 00:34:43.058 EOF 00:34:43.058 )") 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
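Each bdevperf instance above receives its controller configuration on an anonymous descriptor: the harness feeds the output of gen_nvmf_target_json to --json through bash process substitution, which is why the command lines show /dev/fd/63 rather than a file. One such invocation as a sketch (per bdevperf usage: -q queue depth, -o IO size in bytes, -w workload, -t seconds to run, -s hugepage memory in MB, -m core mask, and -i a distinct shared-memory ID so the instances can coexist):

# The write job; <(...) is expanded by bash to /dev/fd/63 at exec time.
./spdk/build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(gen_nvmf_target_json) &
WRITE_PID=$!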
00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:43.058 { 00:34:43.058 "params": { 00:34:43.058 "name": "Nvme$subsystem", 00:34:43.058 "trtype": "$TEST_TRANSPORT", 00:34:43.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.058 "adrfam": "ipv4", 00:34:43.058 "trsvcid": "$NVMF_PORT", 00:34:43.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.058 "hdgst": ${hdgst:-false}, 00:34:43.058 "ddgst": ${ddgst:-false} 00:34:43.058 }, 00:34:43.058 "method": "bdev_nvme_attach_controller" 00:34:43.058 } 00:34:43.058 EOF 00:34:43.058 )") 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:34:43.058 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:34:43.058 { 00:34:43.058 "params": { 00:34:43.058 "name": "Nvme$subsystem", 00:34:43.058 "trtype": "$TEST_TRANSPORT", 00:34:43.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.058 "adrfam": "ipv4", 00:34:43.058 "trsvcid": "$NVMF_PORT", 00:34:43.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.058 "hdgst": ${hdgst:-false}, 00:34:43.058 "ddgst": ${ddgst:-false} 00:34:43.058 }, 00:34:43.059 "method": "bdev_nvme_attach_controller" 00:34:43.059 } 00:34:43.059 EOF 00:34:43.059 )") 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3995648 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
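gen_nvmf_target_json itself is visible in the trace: for each requested subsystem it expands a here-document against the current environment (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420, NQNs stamped with the subsystem index, digests defaulting to false), collects the fragments, and runs the result through jq; the four resolved fragments are printed just below. A condensed sketch of the pattern, not the exact helper, assuming the fragment is spliced into the standard SPDK JSON-config wrapper that bdevperf loads:

# Hypothetical gen_config: one attach-controller entry, values as resolved here.
gen_config() {
    local n=${1:-1}
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$n", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$n",
              "hostnqn": "nqn.2016-06.io.spdk:host$n",
              "hdgst": false, "ddgst": false } } ] } ] }
EOF
}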
00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:43.059 "params": { 00:34:43.059 "name": "Nvme1", 00:34:43.059 "trtype": "tcp", 00:34:43.059 "traddr": "10.0.0.2", 00:34:43.059 "adrfam": "ipv4", 00:34:43.059 "trsvcid": "4420", 00:34:43.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.059 "hdgst": false, 00:34:43.059 "ddgst": false 00:34:43.059 }, 00:34:43.059 "method": "bdev_nvme_attach_controller" 00:34:43.059 }' 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:43.059 "params": { 00:34:43.059 "name": "Nvme1", 00:34:43.059 "trtype": "tcp", 00:34:43.059 "traddr": "10.0.0.2", 00:34:43.059 "adrfam": "ipv4", 00:34:43.059 "trsvcid": "4420", 00:34:43.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.059 "hdgst": false, 00:34:43.059 "ddgst": false 00:34:43.059 }, 00:34:43.059 "method": "bdev_nvme_attach_controller" 00:34:43.059 }' 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:43.059 "params": { 00:34:43.059 "name": "Nvme1", 00:34:43.059 "trtype": "tcp", 00:34:43.059 "traddr": "10.0.0.2", 00:34:43.059 "adrfam": "ipv4", 00:34:43.059 "trsvcid": "4420", 00:34:43.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.059 "hdgst": false, 00:34:43.059 "ddgst": false 00:34:43.059 }, 00:34:43.059 "method": "bdev_nvme_attach_controller" 00:34:43.059 }' 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:34:43.059 08:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:34:43.059 "params": { 00:34:43.059 "name": "Nvme1", 00:34:43.059 "trtype": "tcp", 00:34:43.059 "traddr": "10.0.0.2", 00:34:43.059 "adrfam": "ipv4", 00:34:43.059 "trsvcid": "4420", 00:34:43.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.059 "hdgst": false, 00:34:43.059 "ddgst": false 00:34:43.059 }, 00:34:43.059 "method": "bdev_nvme_attach_controller" 00:34:43.059 }' 00:34:43.059 [2024-10-01 08:49:34.759770] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:34:43.059 [2024-10-01 08:49:34.759827] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:43.059 [2024-10-01 08:49:34.759918] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:34:43.059 [2024-10-01 08:49:34.759964] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:43.059 [2024-10-01 08:49:34.760647] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:34:43.059 [2024-10-01 08:49:34.760695] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:43.059 [2024-10-01 08:49:34.761368] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:34:43.059 [2024-10-01 08:49:34.761416] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:43.320 [2024-10-01 08:49:34.903554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.320 [2024-10-01 08:49:34.933313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.320 [2024-10-01 08:49:34.955671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:34:43.320 [2024-10-01 08:49:34.983485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:34:43.320 [2024-10-01 08:49:34.998929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.320 [2024-10-01 08:49:35.049711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:34:43.320 [2024-10-01 08:49:35.060436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.320 [2024-10-01 08:49:35.111258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:34:43.581 Running I/O for 1 seconds... 00:34:43.581 Running I/O for 1 seconds... 00:34:43.581 Running I/O for 1 seconds... 00:34:43.842 Running I/O for 1 seconds... 
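With the config generator in hand, the harness runs the four one-second jobs concurrently, each pinned to its own core (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80, shared-memory IDs 1-4), then waits on the PIDs; the per-workload latency tables follow. The orchestration, condensed into a sketch reusing the hypothetical gen_config stub above:

# Launch all four workloads in parallel and reap them (masks/IDs from the log).
pids=()
i=1
for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
    set -- $spec    # $1 = core mask, $2 = workload
    ./spdk/build/examples/bdevperf -m "$1" -i "$i" -q 128 -o 4096 -w "$2" -t 1 -s 256 \
        --json <(gen_config) &
    pids+=($!)
    i=$((i + 1))
done
wait "${pids[@]}"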
00:34:44.413 188776.00 IOPS, 737.41 MiB/s 00:34:44.413 Latency(us) 00:34:44.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.413 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:44.413 Nvme1n1 : 1.00 188403.43 735.95 0.00 0.00 675.84 310.61 1979.73 00:34:44.413 =================================================================================================================== 00:34:44.413 Total : 188403.43 735.95 0.00 0.00 675.84 310.61 1979.73 00:34:44.674 12943.00 IOPS, 50.56 MiB/s 00:34:44.674 Latency(us) 00:34:44.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.674 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:44.674 Nvme1n1 : 1.01 12996.32 50.77 0.00 0.00 9813.63 2088.96 12670.29 00:34:44.674 =================================================================================================================== 00:34:44.674 Total : 12996.32 50.77 0.00 0.00 9813.63 2088.96 12670.29 00:34:44.674 18824.00 IOPS, 73.53 MiB/s 00:34:44.674 Latency(us) 00:34:44.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.674 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:44.674 Nvme1n1 : 1.01 18896.64 73.82 0.00 0.00 6756.67 2170.88 10103.47 00:34:44.674 =================================================================================================================== 00:34:44.674 Total : 18896.64 73.82 0.00 0.00 6756.67 2170.88 10103.47 00:34:44.674 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3995650 00:34:44.934 11664.00 IOPS, 45.56 MiB/s 00:34:44.934 Latency(us) 00:34:44.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.934 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:44.934 Nvme1n1 : 1.01 11719.91 45.78 0.00 0.00 10883.49 4751.36 16930.13 00:34:44.934 =================================================================================================================== 00:34:44.934 Total : 11719.91 45.78 0.00 0.00 10883.49 4751.36 16930.13 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3995653 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3995658 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:44.934 08:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:44.934 rmmod nvme_tcp 00:34:44.934 rmmod nvme_fabrics 00:34:44.934 rmmod nvme_keyring 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:44.934 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3995591 ']' 00:34:44.935 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3995591 00:34:44.935 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3995591 ']' 00:34:44.935 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3995591 00:34:44.935 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:34:44.935 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:44.935 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3995591 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3995591' 00:34:45.195 killing process with pid 3995591 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3995591 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3995591 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:34:45.195 08:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:45.195 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.196 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.196 08:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.739 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:47.739 00:34:47.739 real 0m12.724s 00:34:47.739 user 0m15.992s 00:34:47.739 sys 0m7.275s 00:34:47.739 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:47.739 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:47.739 ************************************ 00:34:47.739 END TEST nvmf_bdev_io_wait 00:34:47.739 ************************************ 00:34:47.739 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:47.739 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:47.739 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:47.739 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:47.739 ************************************ 00:34:47.739 START TEST nvmf_queue_depth 00:34:47.739 ************************************ 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:47.740 * Looking for test storage... 
00:34:47.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.740 --rc genhtml_branch_coverage=1 00:34:47.740 --rc genhtml_function_coverage=1 00:34:47.740 --rc genhtml_legend=1 00:34:47.740 --rc geninfo_all_blocks=1 00:34:47.740 --rc geninfo_unexecuted_blocks=1 00:34:47.740 00:34:47.740 ' 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.740 --rc genhtml_branch_coverage=1 00:34:47.740 --rc genhtml_function_coverage=1 00:34:47.740 --rc genhtml_legend=1 00:34:47.740 --rc geninfo_all_blocks=1 00:34:47.740 --rc geninfo_unexecuted_blocks=1 00:34:47.740 00:34:47.740 ' 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.740 --rc genhtml_branch_coverage=1 00:34:47.740 --rc genhtml_function_coverage=1 00:34:47.740 --rc genhtml_legend=1 00:34:47.740 --rc geninfo_all_blocks=1 00:34:47.740 --rc geninfo_unexecuted_blocks=1 00:34:47.740 00:34:47.740 ' 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.740 --rc genhtml_branch_coverage=1 00:34:47.740 --rc genhtml_function_coverage=1 00:34:47.740 --rc genhtml_legend=1 00:34:47.740 --rc geninfo_all_blocks=1 00:34:47.740 --rc 
geninfo_unexecuted_blocks=1 00:34:47.740 00:34:47.740 ' 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.740 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.741 08:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:54.326 08:49:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:54.326 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:54.326 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:54.326 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:54.587 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:54.587 08:49:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:54.587 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:54.587 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:54.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:54.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:34:54.848 00:34:54.848 --- 10.0.0.2 ping statistics --- 00:34:54.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.848 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:54.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:54.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:34:54.848 00:34:54.848 --- 10.0.0.1 ping statistics --- 00:34:54.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.848 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=4000305 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 4000305 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 4000305 ']' 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
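(For orientation: the nvmf_tcp_init trace above reduces to the following namespace plumbing. This is a sketch reconstructed from the commands visible in the log, using the interface and namespace names exactly as logged; cvl_0_0 becomes the target-side port inside the namespace, while cvl_0_1 stays on the host as the initiator side.)

    # sketch of the plumbing nvmf_tcp_init performs above
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the real rule (via ipts) also appends an SPDK_NVMF comment so the
    # iptr cleanup later in the log can strip it with grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespace check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host check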
00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:54.848 08:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:54.848 [2024-10-01 08:49:46.540580] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:54.848 [2024-10-01 08:49:46.541419] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:34:54.848 [2024-10-01 08:49:46.541460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.848 [2024-10-01 08:49:46.623502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.109 [2024-10-01 08:49:46.714579] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.109 [2024-10-01 08:49:46.714633] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.109 [2024-10-01 08:49:46.714641] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.109 [2024-10-01 08:49:46.714648] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.109 [2024-10-01 08:49:46.714655] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.109 [2024-10-01 08:49:46.715414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.109 [2024-10-01 08:49:46.790355] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:55.110 [2024-10-01 08:49:46.790643] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
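(With the target app now up in interrupt mode, queue_depth.sh provisions it over /var/tmp/spdk.sock. The rpc_cmd calls traced below correspond roughly to the following scripts/rpc.py invocations; rpc_cmd is autotest's thin wrapper around that script, and the option readings in the comments are the usual rpc.py meanings rather than anything the log itself states.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
                                                    # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420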
00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.682 [2024-10-01 08:49:47.428287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.682 Malloc0 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:55.682 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.941 [2024-10-01 08:49:47.508379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4000402 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4000402 /var/tmp/bdevperf.sock 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 4000402 ']' 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:55.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:55.941 08:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:55.941 [2024-10-01 08:49:47.564042] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:34:55.941 [2024-10-01 08:49:47.564094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000402 ] 00:34:55.941 [2024-10-01 08:49:47.624723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.941 [2024-10-01 08:49:47.691479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.880 08:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:56.880 08:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:34:56.880 08:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:56.880 08:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.880 08:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.880 NVMe0n1 00:34:56.880 08:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.880 08:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:56.880 Running I/O for 10 seconds... 00:35:07.187 9138.00 IOPS, 35.70 MiB/s 9219.50 IOPS, 36.01 MiB/s 9448.00 IOPS, 36.91 MiB/s 9494.00 IOPS, 37.09 MiB/s 10044.80 IOPS, 39.24 MiB/s 10430.33 IOPS, 40.74 MiB/s 10686.57 IOPS, 41.74 MiB/s 10888.38 IOPS, 42.53 MiB/s 11050.44 IOPS, 43.17 MiB/s 11203.60 IOPS, 43.76 MiB/s 00:35:07.187 Latency(us) 00:35:07.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.187 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:07.187 Verification LBA range: start 0x0 length 0x4000 00:35:07.187 NVMe0n1 : 10.05 11241.04 43.91 0.00 0.00 90749.61 10540.37 67283.63 00:35:07.187 =================================================================================================================== 00:35:07.187 Total : 11241.04 43.91 0.00 0.00 90749.61 10540.37 67283.63 00:35:07.187 { 00:35:07.187 "results": [ 00:35:07.187 { 00:35:07.187 "job": "NVMe0n1", 00:35:07.187 "core_mask": "0x1", 00:35:07.187 "workload": "verify", 00:35:07.187 "status": "finished", 00:35:07.187 "verify_range": { 00:35:07.187 "start": 0, 00:35:07.187 "length": 16384 00:35:07.187 }, 00:35:07.187 "queue_depth": 1024, 00:35:07.187 "io_size": 4096, 00:35:07.187 "runtime": 10.046493, 00:35:07.187 "iops": 11241.037046459895, 00:35:07.187 "mibps": 43.91030096273396, 00:35:07.187 "io_failed": 0, 00:35:07.187 "io_timeout": 0, 00:35:07.187 "avg_latency_us": 90749.61077547455, 00:35:07.187 "min_latency_us": 10540.373333333333, 00:35:07.187 "max_latency_us": 67283.62666666666 00:35:07.187 } 00:35:07.187 ], 00:35:07.187 "core_count": 1 00:35:07.187 } 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4000402 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 4000402 ']' 
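(A quick cross-check of the results table above: bdevperf attached to the target as NVMe0 over TCP and drove the verify workload for about 10 s at queue depth 1024 with 4 KiB I/O, per the flags in the trace, -q 1024 -o 4096 -w verify -t 10, with -z having it wait for the perform_tests RPC issued by bdevperf.py. The MiB/s column is just IOPS scaled by the I/O size, iops * io_size / 2^20; a one-liner to verify the reported pair:)

    awk 'BEGIN { printf "%.2f MiB/s\n", 11241.04 * 4096 / 1048576 }'   # prints 43.91 MiB/s, matching the table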
00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 4000402 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4000402 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4000402' 00:35:07.187 killing process with pid 4000402 00:35:07.187 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 4000402 00:35:07.187 Received shutdown signal, test time was about 10.000000 seconds 00:35:07.187 00:35:07.187 Latency(us) 00:35:07.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.187 =================================================================================================================== 00:35:07.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 4000402 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:07.188 rmmod nvme_tcp 00:35:07.188 rmmod nvme_fabrics 00:35:07.188 rmmod nvme_keyring 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 4000305 ']' 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 4000305 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 4000305 ']' 00:35:07.188 08:49:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 4000305 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4000305 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4000305' 00:35:07.188 killing process with pid 4000305 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 4000305 00:35:07.188 08:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 4000305 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.453 08:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.367 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:09.367 00:35:09.367 real 0m22.063s 00:35:09.367 user 0m24.484s 00:35:09.367 sys 0m7.111s 00:35:09.367 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:09.367 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:09.367 ************************************ 00:35:09.367 END TEST nvmf_queue_depth 00:35:09.367 ************************************ 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:09.628 ************************************ 00:35:09.628 START TEST nvmf_target_multipath 00:35:09.628 ************************************ 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:09.628 * Looking for test storage... 00:35:09.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.628 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:09.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.629 --rc genhtml_branch_coverage=1 00:35:09.629 --rc genhtml_function_coverage=1 00:35:09.629 --rc genhtml_legend=1 00:35:09.629 --rc geninfo_all_blocks=1 00:35:09.629 --rc geninfo_unexecuted_blocks=1 00:35:09.629 00:35:09.629 ' 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:09.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.629 --rc genhtml_branch_coverage=1 00:35:09.629 --rc genhtml_function_coverage=1 00:35:09.629 --rc genhtml_legend=1 00:35:09.629 --rc geninfo_all_blocks=1 00:35:09.629 --rc geninfo_unexecuted_blocks=1 00:35:09.629 00:35:09.629 ' 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:09.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.629 --rc genhtml_branch_coverage=1 00:35:09.629 --rc genhtml_function_coverage=1 00:35:09.629 --rc genhtml_legend=1 00:35:09.629 --rc geninfo_all_blocks=1 00:35:09.629 --rc geninfo_unexecuted_blocks=1 00:35:09.629 00:35:09.629 ' 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:09.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.629 --rc genhtml_branch_coverage=1 00:35:09.629 --rc genhtml_function_coverage=1 00:35:09.629 --rc 
genhtml_legend=1 00:35:09.629 --rc geninfo_all_blocks=1 00:35:09.629 --rc geninfo_unexecuted_blocks=1 00:35:09.629 00:35:09.629 ' 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.629 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.891 08:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:09.891 08:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:18.035 08:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:18.035 08:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:18.035 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:18.035 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:18.036 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:18.036 08:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:18.036 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:18.036 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:18.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:18.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:35:18.036 00:35:18.036 --- 10.0.0.2 ping statistics --- 00:35:18.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.036 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:18.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:18.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:35:18.036 00:35:18.036 --- 10.0.0.1 ping statistics --- 00:35:18.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.036 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:18.036 only one NIC for nvmf test 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.036 rmmod nvme_tcp 00:35:18.036 rmmod nvme_fabrics 00:35:18.036 rmmod nvme_keyring 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p 
]] 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:18.036 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:18.037 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:35:18.037 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:18.037 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:35:18.037 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:18.037 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:18.037 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.037 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.037 08:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:19.423 08:50:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:35:19.423 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:19.424 00:35:19.424 real 0m9.712s 00:35:19.424 user 0m2.063s 00:35:19.424 sys 0m5.603s 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:19.424 ************************************ 00:35:19.424 END TEST nvmf_target_multipath 00:35:19.424 ************************************ 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:19.424 08:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:19.424 ************************************ 00:35:19.424 START TEST nvmf_zcopy 00:35:19.424 ************************************ 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:19.424 * Looking for test storage... 
00:35:19.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:19.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.424 --rc genhtml_branch_coverage=1 00:35:19.424 --rc genhtml_function_coverage=1 00:35:19.424 --rc genhtml_legend=1 00:35:19.424 --rc geninfo_all_blocks=1 00:35:19.424 --rc geninfo_unexecuted_blocks=1 00:35:19.424 00:35:19.424 ' 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:19.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.424 --rc genhtml_branch_coverage=1 00:35:19.424 --rc genhtml_function_coverage=1 00:35:19.424 --rc genhtml_legend=1 00:35:19.424 --rc geninfo_all_blocks=1 00:35:19.424 --rc geninfo_unexecuted_blocks=1 00:35:19.424 00:35:19.424 ' 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:19.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.424 --rc genhtml_branch_coverage=1 00:35:19.424 --rc genhtml_function_coverage=1 00:35:19.424 --rc genhtml_legend=1 00:35:19.424 --rc geninfo_all_blocks=1 00:35:19.424 --rc geninfo_unexecuted_blocks=1 00:35:19.424 00:35:19.424 ' 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:19.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.424 --rc genhtml_branch_coverage=1 00:35:19.424 --rc genhtml_function_coverage=1 00:35:19.424 --rc genhtml_legend=1 00:35:19.424 --rc geninfo_all_blocks=1 00:35:19.424 --rc geninfo_unexecuted_blocks=1 00:35:19.424 00:35:19.424 ' 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.424 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.687 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.688 08:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:19.688 08:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.830 08:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:27.830 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:27.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:27.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:27.831 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:27.831 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.831 08:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:27.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:27.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms
00:35:27.831
00:35:27.831 --- 10.0.0.2 ping statistics ---
00:35:27.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:27.831 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:27.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:27.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms
00:35:27.831
00:35:27.831 --- 10.0.0.1 ping statistics ---
00:35:27.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:27.831 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:27.831 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=4010784
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 4010784
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
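Condensed, the nvmf_tcp_init trace above turns the two ports of one ice-driven NIC into a self-contained NVMe/TCP testbed: one port (cvl_0_0, the target side) is moved into a private network namespace, while its sibling (cvl_0_1, the initiator side) stays in the root namespace. A minimal standalone sketch of the same wiring, with interface names, addresses, and the iptables tag copied from the trace; the final ip netns delete is an assumption, since the trace redirects _remove_spdk_ns output away:

# Point-to-point NVMe/TCP testbed from one dual-port NIC (names/addresses from the trace).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tags the rule so teardown can
# strip it later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                           # root namespace -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> initiator port
# Teardown sketch (assumed body of _remove_spdk_ns; the trace hides it):
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
#   ip -4 addr flush cvl_0_1
#   ip netns delete "$NS"

The tag-and-filter iptables pattern is the same one visible in the multipath teardown earlier in the log (nvmf_tcp_fini's iptr step).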
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 4010784 ']'
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:27.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:27.832 08:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 [2024-10-01 08:50:18.537810] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:27.832 [2024-10-01 08:50:18.538940] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:35:27.832 [2024-10-01 08:50:18.539002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:27.832 [2024-10-01 08:50:18.610848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:27.832 [2024-10-01 08:50:18.705055] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:27.832 [2024-10-01 08:50:18.705109] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:27.832 [2024-10-01 08:50:18.705117] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:27.832 [2024-10-01 08:50:18.705125] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:27.832 [2024-10-01 08:50:18.705131] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:27.832 [2024-10-01 08:50:18.705870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:35:27.832 [2024-10-01 08:50:18.781748] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:27.832 [2024-10-01 08:50:18.782063] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
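At this point the target is up: a single reactor on core 1 (mask 0x2), all SPDK threads in interrupt mode instead of busy-polling, and the RPC server on the default UNIX socket. The launch-and-wait pattern above, reduced to a standalone sketch; the polling loop is an assumption standing in for the harness's waitforlisten helper (a cheap RPC such as rpc_get_methods fails until the socket exists):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask,
# -m 0x2: run on core 1 only, --interrupt-mode: reactors sleep on fds.
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Assumed readiness probe (stands in for waitforlisten; max_retries=100 as in the trace):
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
kill -0 "$nvmfpid"    # target survived startup and is ready for provisioning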
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 [2024-10-01 08:50:19.386735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 [2024-10-01 08:50:19.415040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 malloc0
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:35:27.832 {
00:35:27.832 "params": {
00:35:27.832 "name": "Nvme$subsystem",
00:35:27.832 "trtype": "$TEST_TRANSPORT",
00:35:27.832 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:27.832 "adrfam": "ipv4",
00:35:27.832 "trsvcid": "$NVMF_PORT",
00:35:27.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:27.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:27.832 "hdgst": ${hdgst:-false},
00:35:27.832 "ddgst": ${ddgst:-false}
00:35:27.832 },
00:35:27.832 "method": "bdev_nvme_attach_controller"
00:35:27.832 }
00:35:27.832 EOF
00:35:27.832 )")
00:35:27.832 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:35:27.833 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq .
00:35:27.833 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:35:27.833 08:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:35:27.833 "params": {
00:35:27.833 "name": "Nvme1",
00:35:27.833 "trtype": "tcp",
00:35:27.833 "traddr": "10.0.0.2",
00:35:27.833 "adrfam": "ipv4",
00:35:27.833 "trsvcid": "4420",
00:35:27.833 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:27.833 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:27.833 "hdgst": false,
00:35:27.833 "ddgst": false
00:35:27.833 },
00:35:27.833 "method": "bdev_nvme_attach_controller"
00:35:27.833 }'
00:35:27.833 [2024-10-01 08:50:19.522897] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
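Taken together, the rpc_cmd calls above built a zero-copy TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a 32 MB malloc ramdisk as namespace 1, and data plus discovery listeners on 10.0.0.2:4420. Replayed as standalone rpc.py invocations (rpc_cmd in the harness forwards its arguments to the same RPC server; flags copied verbatim from the trace), followed by the bdevperf run. The outer "subsystems"/"config" wrapper below is an assumption about what gen_nvmf_target_json's jq step emits around the fragment printed above; only the inner object appears in the log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy        # transport flags exactly as captured above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0               # 32 MB ramdisk, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# 10 s of queue-depth-128 verify I/O in 8 KiB units; config fed on stdin
# instead of the harness's /dev/fd/62:
"$SPDK/build/examples/bdevperf" -t 10 -q 128 -w verify -o 8192 --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF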
00:35:27.833 [2024-10-01 08:50:19.522959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011011 ]
00:35:27.833 [2024-10-01 08:50:19.588077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:28.128 [2024-10-01 08:50:19.658826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:35:28.430 Running I/O for 10 seconds...
00:35:38.262 6515.00 IOPS, 50.90 MiB/s
6565.00 IOPS, 51.29 MiB/s
6582.67 IOPS, 51.43 MiB/s
6592.50 IOPS, 51.50 MiB/s
6600.00 IOPS, 51.56 MiB/s
6672.17 IOPS, 52.13 MiB/s
7083.29 IOPS, 55.34 MiB/s
7392.00 IOPS, 57.75 MiB/s
7632.33 IOPS, 59.63 MiB/s
7825.90 IOPS, 61.14 MiB/s
00:35:38.262 Latency(us)
00:35:38.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:38.262 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:35:38.262 Verification LBA range: start 0x0 length 0x1000
00:35:38.262 Nvme1n1 : 10.01 7829.87 61.17 0.00 0.00 16295.14 1966.08 27525.12
00:35:38.262 ===================================================================================================================
00:35:38.262 Total : 7829.87 61.17 0.00 0.00 16295.14 1966.08 27525.12
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4013013
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:35:38.522 {
00:35:38.522 "params": {
00:35:38.522 "name": "Nvme$subsystem",
00:35:38.522 "trtype": "$TEST_TRANSPORT",
00:35:38.522 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:38.522 "adrfam": "ipv4",
00:35:38.522 "trsvcid": "$NVMF_PORT",
00:35:38.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:38.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:38.522 "hdgst": ${hdgst:-false},
00:35:38.522 "ddgst": ${ddgst:-false}
00:35:38.522 },
00:35:38.522 "method": "bdev_nvme_attach_controller"
00:35:38.522 }
00:35:38.522 EOF
00:35:38.522 )")
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:35:38.522 [2024-10-01 08:50:30.146287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-10-01 08:50:30.146316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 --
# jq . 00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:35:38.522 08:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:35:38.522 "params": { 00:35:38.522 "name": "Nvme1", 00:35:38.522 "trtype": "tcp", 00:35:38.522 "traddr": "10.0.0.2", 00:35:38.522 "adrfam": "ipv4", 00:35:38.522 "trsvcid": "4420", 00:35:38.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:38.522 "hdgst": false, 00:35:38.522 "ddgst": false 00:35:38.522 }, 00:35:38.522 "method": "bdev_nvme_attach_controller" 00:35:38.522 }' 00:35:38.522 [2024-10-01 08:50:30.158255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.158265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.170254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.170263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.182253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.182267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.189142] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:35:38.522 [2024-10-01 08:50:30.189195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013013 ] 00:35:38.522 [2024-10-01 08:50:30.194253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.194262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.206254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.206264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.218254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.218261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.230253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.230261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.242253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.242262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.251356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.522 [2024-10-01 08:50:30.254253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.254261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.266254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.266262] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.278254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.278263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.290254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.290267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.302253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.302262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.314254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.314263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.315489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.522 [2024-10-01 08:50:30.326266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.326277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.522 [2024-10-01 08:50:30.338259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.522 [2024-10-01 08:50:30.338273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.350256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.350266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.362254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.362262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.374254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.374267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.386262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.386277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.398258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.398269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.410255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.410265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.422256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.422268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.434255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.434266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.482555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.482571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.494257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.494270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 Running I/O for 5 seconds... 00:35:38.782 [2024-10-01 08:50:30.509327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.509343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.522055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.522072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.534568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.534583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.549225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.782 [2024-10-01 08:50:30.549241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.782 [2024-10-01 08:50:30.562422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.783 [2024-10-01 08:50:30.562437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.783 [2024-10-01 08:50:30.573765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.783 [2024-10-01 08:50:30.573781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.783 [2024-10-01 08:50:30.586398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.783 [2024-10-01 08:50:30.586412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.783 [2024-10-01 08:50:30.601209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:38.783 [2024-10-01 08:50:30.601226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.614216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.614232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.625940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.625957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.638707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.638722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.653216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.653236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.665999] 
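The "[ DPDK EAL parameters: bdevperf ... ]" line and "Running I/O for 5 seconds..." identify this stretch as a bdevperf run. Below is a rough reconstruction of the command, not a verbatim copy: only -t 5 is confirmed by the output, the 8 KiB I/O size is inferred from the throughput samples further down, and the binary path, workload, and queue depth are assumptions:

    # Rough reconstruction of the bdevperf invocation behind this output.
    import subprocess

    subprocess.run(
        [
            "build/examples/bdevperf",       # location in recent SPDK trees (assumed)
            "--json", "/tmp/bdevperf.json",  # config carrying bdev_nvme_attach_controller
            "-q", "128",                     # queue depth: assumed, not visible in the log
            "-o", "8192",                    # 8 KiB I/Os, inferred from the samples below
            "-w", "verify",                  # workload: assumed
            "-t", "5",                       # matches "Running I/O for 5 seconds..."
        ],
        check=True,
    )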
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.666015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.678342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.678358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.689745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.689763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.702760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.702776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.717170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.717185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.729837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.729852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.742374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.742390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.754305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.754320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.766035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.766050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.778368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.778383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.790029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.790044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.802576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.802591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.817279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.817294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.829737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.829752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.842071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.842087] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.043 [2024-10-01 08:50:30.854821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.043 [2024-10-01 08:50:30.854836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.869227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.869242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.881747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.881762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.893948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.893964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.906622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.906637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.921534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.921550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.934629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.934643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.949429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.949444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.962329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.962344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.974707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.974722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.303 [2024-10-01 08:50:30.989720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.303 [2024-10-01 08:50:30.989734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.002128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.002144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.014016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.014031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.027007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.027021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.041387] 
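The error pair that repeats through this whole stretch is the target rejecting a duplicate namespace add: judging by the function names, the RPC layer pauses the subsystem, attempts the add from its nvmf_rpc_ns_paused callback, and spdk_nvmf_subsystem_add_ns_ext refuses because NSID 1 is already populated. Here is a sketch of a loop that provokes exactly this behavior; the rpc.py path and the Malloc0 bdev name are placeholders, and the flag spelling should be checked against scripts/rpc.py nvmf_subsystem_add_ns -h:

    # Sketch: re-adding NSID 1 to a subsystem that already has it reproduces
    # the "Requested NSID 1 already in use" / "Unable to add namespace" pair.
    import subprocess

    RPC = "scripts/rpc.py"                  # inside an SPDK checkout (assumed)
    NQN = "nqn.2016-06.io.spdk:cnode1"      # subsystem NQN from the config above

    for attempt in range(3):
        proc = subprocess.run(
            [RPC, "nvmf_subsystem_add_ns", "-n", "1", NQN, "Malloc0"],
            capture_output=True, text=True,
        )
        # Every attempt after the first fails, as seen throughout this log.
        print(f"attempt {attempt}: rc={proc.returncode} {proc.stderr.strip()}")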
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.041402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.054266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.054281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.066073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.066088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.078562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.078576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.093492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.093507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.105964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.105979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.304 [2024-10-01 08:50:31.118309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.304 [2024-10-01 08:50:31.118323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.130933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.130948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.145795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.145811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.158449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.158463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.170170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.170185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.182794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.182809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.197817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.197832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.210629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.210644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.225341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.225355] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.238057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.238072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.249880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.249895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.262319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.262335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.274814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.274829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.289060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.289075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.563 [2024-10-01 08:50:31.301537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.563 [2024-10-01 08:50:31.301552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.564 [2024-10-01 08:50:31.314007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.564 [2024-10-01 08:50:31.314022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.564 [2024-10-01 08:50:31.326742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.564 [2024-10-01 08:50:31.326756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.564 [2024-10-01 08:50:31.341215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.564 [2024-10-01 08:50:31.341230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.564 [2024-10-01 08:50:31.353860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.564 [2024-10-01 08:50:31.353875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.564 [2024-10-01 08:50:31.366518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.564 [2024-10-01 08:50:31.366532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.564 [2024-10-01 08:50:31.381335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.564 [2024-10-01 08:50:31.381349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.394063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.394079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.406396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.406410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.421255] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.421270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.434068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.434083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.446441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.446455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.461258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.461273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.473898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.473913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.486172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.486188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.499273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.499287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 19216.00 IOPS, 150.12 MiB/s [2024-10-01 08:50:31.513728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.513743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.526354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.526368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.538414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.538427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.553085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.553099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.565697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.565712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.578097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.578112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.590128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 08:50:31.590142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.602679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.823 [2024-10-01 
08:50:31.602693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.823 [2024-10-01 08:50:31.617199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.824 [2024-10-01 08:50:31.617214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.824 [2024-10-01 08:50:31.630090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.824 [2024-10-01 08:50:31.630109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:39.824 [2024-10-01 08:50:31.642410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:39.824 [2024-10-01 08:50:31.642425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.654888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.654902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.669773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.669788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.682257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.682272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.694849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.694864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.709326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.709340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.722110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.722125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.734862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.734876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.749190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.749204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.083 [2024-10-01 08:50:31.762336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.083 [2024-10-01 08:50:31.762350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.774113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.774128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.786808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.786821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.801203] 
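The first periodic sample above, 19216.00 IOPS at 150.12 MiB/s, pins down the I/O size bdevperf was using: throughput divided by IOPS comes to roughly 8192 bytes, i.e. 8 KiB per I/O. The later samples (19256.00 and 19238.67 IOPS) give the same answer:

    # The printed rate pair fixes the I/O size: MiB/s divided by IOPS.
    iops = 19216.00
    mib_per_s = 150.12
    io_size = mib_per_s * 1024 * 1024 / iops   # bytes per I/O
    print(round(io_size))                      # ~8192 -> bdevperf ran 8 KiB I/Os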
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.801218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.813837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.813851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.825982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.826000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.838713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.838727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.853451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.853465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.865987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.866005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.878051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.878069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.890445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.890459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.084 [2024-10-01 08:50:31.903358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.084 [2024-10-01 08:50:31.903373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.343 [2024-10-01 08:50:31.917211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.343 [2024-10-01 08:50:31.917225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.343 [2024-10-01 08:50:31.929650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.343 [2024-10-01 08:50:31.929664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.343 [2024-10-01 08:50:31.942542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.343 [2024-10-01 08:50:31.942556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.343 [2024-10-01 08:50:31.957454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.343 [2024-10-01 08:50:31.957468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.343 [2024-10-01 08:50:31.970045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.343 [2024-10-01 08:50:31.970059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.343 [2024-10-01 08:50:31.982739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:31.982753] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:31.997477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:31.997491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.010319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.010333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.021713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.021727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.034493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.034507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.046729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.046743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.061219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.061234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.073708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.073723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.086571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.086584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.101749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.101763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.114093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.114108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.126331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.126349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.138280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.138294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.151164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.151178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.344 [2024-10-01 08:50:32.165504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.344 [2024-10-01 08:50:32.165519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.178153] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.178168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.190850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.190865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.205225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.205240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.218118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.218132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.230500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.230514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.245365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.245379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.258069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.258083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.270929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.270944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.285173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.285188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.298004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.298018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.310791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.310805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.325306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.325321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.338361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.338376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.351041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.351055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.365639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.365654] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.378530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.378548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.393260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.393275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.406347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.406361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.604 [2024-10-01 08:50:32.418136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.604 [2024-10-01 08:50:32.418151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.430359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.430374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.442834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.442848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.457624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.457638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.469790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.469804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.482283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.482297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.494544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.494558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 19256.00 IOPS, 150.44 MiB/s [2024-10-01 08:50:32.508755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.508770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.521614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.521630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.534842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.534857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.549607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.549623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.562115] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.562131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.574446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.574461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.589547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.589561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.602068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.602082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.614562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.614576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.629448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.629463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.641818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.641833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.654464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.654478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.669507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.669522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.865 [2024-10-01 08:50:32.682021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.865 [2024-10-01 08:50:32.682035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.694224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.694239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.706836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.706850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.721474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.721489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.734101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.734116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.746666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.746681] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.761518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.761533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.773744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.773759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.786314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.786329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.797879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.797894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.810847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.810861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.825125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.825140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.838012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.838028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.850331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.850346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.862855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.862869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.877360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.877375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.889960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.889975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.902337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.902351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.914522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.914536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.929136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.929151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.126 [2024-10-01 08:50:32.941764] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.126 [2024-10-01 08:50:32.941778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:32.954018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:32.954033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:32.966565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:32.966579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:32.981237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:32.981252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:32.993786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:32.993800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.006151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.006166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.018591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.018605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.033632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.033647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.046122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.046137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.058484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.058499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.070664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.070679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.085369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.085384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.097513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.097528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.110000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.110019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.122398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.122412] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.136963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.136977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.149941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.149956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.162082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.162097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.174339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.174353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.187000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.187015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.387 [2024-10-01 08:50:33.201371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.387 [2024-10-01 08:50:33.201386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.214363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.647 [2024-10-01 08:50:33.214379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.225478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.647 [2024-10-01 08:50:33.225493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.238368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.647 [2024-10-01 08:50:33.238382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.253530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.647 [2024-10-01 08:50:33.253545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.266233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.647 [2024-10-01 08:50:33.266248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.277733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.647 [2024-10-01 08:50:33.277748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.290618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.647 [2024-10-01 08:50:33.290631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.305160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.647 [2024-10-01 08:50:33.305174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.647 [2024-10-01 08:50:33.318115] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:41.647 [2024-10-01 08:50:33.318129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:41.647 [2024-10-01 08:50:33.330680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:41.647 [2024-10-01 08:50:33.330694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line *ERROR* pair repeats roughly every 12-15 ms from 08:50:33.345 through 08:50:35.507 while nvmf_subsystem_add_ns is retried against the paused subsystem; those repetitions are omitted here, and the bdevperf throughput lines that were interleaved with them are kept below]
00:35:41.908 19238.67 IOPS, 150.30 MiB/s
00:35:42.950 19254.25 IOPS, 150.42 MiB/s
00:35:43.733 19265.20 IOPS, 150.51 MiB/s
00:35:43.733 Latency(us)
00:35:43.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:43.733 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:35:43.733 Nvme1n1 : 5.01 19269.37 150.54 0.00 0.00 6636.69 2689.71 11468.80
00:35:43.733 ===================================================================================================================
00:35:43.733 Total : 19269.37 150.54 0.00 0.00 6636.69 2689.71 11468.80
[the *ERROR* pair continues from 08:50:35.518 through 08:50:35.650, until the retry loop is torn down below; those repetitions are omitted as well]
00:35:43.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4013013) - No such process
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4013013
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:43.994 delay0
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:43.994 08:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:35:43.994 [2024-10-01 08:50:35.747030] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:35:52.129 [2024-10-01 08:50:42.619955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22148d0 is same with the state(6) to be set
00:35:52.129 Initializing NVMe Controllers
00:35:52.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:52.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:35:52.129 Initialization complete. Launching workers.
00:35:52.129 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6183
00:35:52.129 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6465, failed to submit 38
00:35:52.129 success 6291, unsuccessful 174, failed 0
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:52.129 rmmod nvme_tcp
00:35:52.129 rmmod nvme_fabrics
00:35:52.129 rmmod nvme_keyring
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 4010784 ']'
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 4010784
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 4010784 ']'
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 4010784
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4010784
00:35:52.129 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4010784'
00:35:52.130 killing process with pid 4010784
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 4010784
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 4010784
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:52.130 08:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:53.515 08:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:53.515
00:35:53.515 real 0m33.901s
00:35:53.515 user 0m44.016s
00:35:53.515 sys 0m11.735s
00:35:53.515 08:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:53.515 08:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:53.515 ************************************
00:35:53.515 END TEST nvmf_zcopy
00:35:53.515 ************************************
00:35:53.515 08:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:35:53.515 08:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:35:53.515 08:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:35:53.515 08:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:53.515 ************************************
00:35:53.515 START TEST nvmf_nmic
00:35:53.515 ************************************
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:35:53.515 * Looking for test storage...
00:35:53.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:35:53.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:53.515 --rc genhtml_branch_coverage=1
00:35:53.515 --rc genhtml_function_coverage=1
00:35:53.515 --rc genhtml_legend=1
00:35:53.515 --rc geninfo_all_blocks=1
00:35:53.515 --rc geninfo_unexecuted_blocks=1
00:35:53.515
00:35:53.515 '
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:35:53.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:53.515 --rc genhtml_branch_coverage=1
00:35:53.515 --rc genhtml_function_coverage=1
00:35:53.515 --rc genhtml_legend=1
00:35:53.515 --rc geninfo_all_blocks=1
00:35:53.515 --rc geninfo_unexecuted_blocks=1
00:35:53.515
00:35:53.515 '
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:35:53.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:53.515 --rc genhtml_branch_coverage=1
00:35:53.515 --rc genhtml_function_coverage=1
00:35:53.515 --rc genhtml_legend=1
00:35:53.515 --rc geninfo_all_blocks=1
00:35:53.515 --rc geninfo_unexecuted_blocks=1
00:35:53.515
00:35:53.515 '
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:35:53.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:53.515 --rc genhtml_branch_coverage=1
00:35:53.515 --rc genhtml_function_coverage=1
00:35:53.515 --rc genhtml_legend=1
00:35:53.515 --rc geninfo_all_blocks=1
00:35:53.515 --rc geninfo_unexecuted_blocks=1
00:35:53.515
00:35:53.515 '
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:53.515 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:53.516 08:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.659 08:50:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:01.659 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:01.659 08:50:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:01.659 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:01.659 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:01.659 08:50:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:01.659 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
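For reference, the nvmf_tcp_init bring-up being traced here reduces to the following shell steps (a condensed sketch using the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses from this run; nvmf/common.sh remains the authoritative version):

# The target-side port moves into its own network namespace so the
# initiator and target get distinct network stacks on a single host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# The initiator keeps 10.0.0.1 on cvl_0_1; the target gets 10.0.0.2
# inside the namespace, then both links and the namespaced loopback
# come up.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port and verify reachability both ways,
# as the trace below does with two single-packet pings.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1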
00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.659 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:36:01.660 00:36:01.660 --- 10.0.0.2 ping statistics --- 00:36:01.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.660 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:01.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:36:01.660 00:36:01.660 --- 10.0.0.1 ping statistics --- 00:36:01.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.660 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=4019383 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 4019383 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 4019383 ']' 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:01.660 08:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 [2024-10-01 08:50:52.461890] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:01.660 [2024-10-01 08:50:52.463038] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:36:01.660 [2024-10-01 08:50:52.463092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.660 [2024-10-01 08:50:52.534835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:01.660 [2024-10-01 08:50:52.609407] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.660 [2024-10-01 08:50:52.609446] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.660 [2024-10-01 08:50:52.609454] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.660 [2024-10-01 08:50:52.609461] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.660 [2024-10-01 08:50:52.609467] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.660 [2024-10-01 08:50:52.611015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.660 [2024-10-01 08:50:52.611239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.660 [2024-10-01 08:50:52.611239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.660 [2024-10-01 08:50:52.611101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.660 [2024-10-01 08:50:52.683379] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:01.660 [2024-10-01 08:50:52.683718] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:01.660 [2024-10-01 08:50:52.684610] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:01.660 [2024-10-01 08:50:52.684614] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
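The target launch just traced (nvmf_tgt pinned to cores 0-3 with tracepoints and interrupt mode enabled, run inside the target namespace), together with the provisioning that follows, amounts to roughly this sketch. The until-loop is a hypothetical stand-in for the waitforlisten helper, and nvmfpid mirrors the PID variable captured by nvmfappstart:

# Start the target inside the namespace with the flags from this run.
ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

# Hypothetical stand-in for waitforlisten: poll the default RPC socket
# until the application answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.5
done

# Provision what the nmic test needs, mirroring the rpc_cmd calls traced
# below: a TCP transport, one malloc bdev, one subsystem carrying that
# bdev as a namespace, and a listener on the namespaced 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420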
00:36:01.660 [2024-10-01 08:50:52.684851] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 [2024-10-01 08:50:53.300017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 Malloc0 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.660 08:50:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 [2024-10-01 08:50:53.363867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:01.660 test case1: single bdev can't be used in multiple subsystems 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.660 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.660 [2024-10-01 08:50:53.399615] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:01.660 [2024-10-01 08:50:53.399634] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:01.660 [2024-10-01 08:50:53.399642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:01.660 request: 00:36:01.660 { 00:36:01.660 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:01.660 "namespace": { 00:36:01.660 "bdev_name": "Malloc0", 00:36:01.660 "no_auto_visible": false 00:36:01.660 }, 00:36:01.660 "method": "nvmf_subsystem_add_ns", 00:36:01.660 "req_id": 1 00:36:01.660 } 00:36:01.660 Got JSON-RPC error response 00:36:01.660 response: 00:36:01.660 { 00:36:01.660 "code": -32602, 00:36:01.660 "message": "Invalid parameters" 00:36:01.660 } 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:01.661 Adding namespace failed - expected result. 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:01.661 test case2: host connect to nvmf target in multiple paths 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:01.661 [2024-10-01 08:50:53.411725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.661 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:02.233 08:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:02.494 08:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:02.494 08:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:36:02.494 08:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:02.494 08:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:02.494 08:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:36:04.408 08:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:04.408 08:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:04.408 08:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:36:04.408 08:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:04.408 08:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:04.408 08:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:36:04.408 08:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:04.408 [global] 00:36:04.408 thread=1 00:36:04.408 invalidate=1 00:36:04.408 rw=write 00:36:04.408 time_based=1 00:36:04.408 runtime=1 00:36:04.408 ioengine=libaio 00:36:04.408 direct=1 00:36:04.408 bs=4096 00:36:04.408 iodepth=1 
00:36:04.408 norandommap=0 00:36:04.408 numjobs=1 00:36:04.408 00:36:04.408 verify_dump=1 00:36:04.408 verify_backlog=512 00:36:04.408 verify_state_save=0 00:36:04.408 do_verify=1 00:36:04.408 verify=crc32c-intel 00:36:04.408 [job0] 00:36:04.408 filename=/dev/nvme0n1 00:36:04.408 Could not set queue depth (nvme0n1) 00:36:04.978 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:04.978 fio-3.35 00:36:04.978 Starting 1 thread 00:36:05.919 00:36:05.919 job0: (groupid=0, jobs=1): err= 0: pid=4020475: Tue Oct 1 08:50:57 2024 00:36:05.919 read: IOPS=16, BW=66.0KiB/s (67.6kB/s)(68.0KiB/1030msec) 00:36:05.919 slat (nsec): min=7902, max=28516, avg=25538.65, stdev=6256.22 00:36:05.919 clat (usec): min=865, max=41964, avg=39058.31, stdev=9853.16 00:36:05.919 lat (usec): min=875, max=41991, avg=39083.84, stdev=9857.23 00:36:05.919 clat percentiles (usec): 00:36:05.919 | 1.00th=[ 865], 5.00th=[ 865], 10.00th=[40633], 20.00th=[41157], 00:36:05.919 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:36:05.919 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:05.919 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:05.919 | 99.99th=[42206] 00:36:05.919 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:36:05.919 slat (usec): min=9, max=28592, avg=88.13, stdev=1262.21 00:36:05.919 clat (usec): min=148, max=915, avg=616.29, stdev=144.20 00:36:05.919 lat (usec): min=159, max=29361, avg=704.43, stdev=1277.38 00:36:05.919 clat percentiles (usec): 00:36:05.919 | 1.00th=[ 219], 5.00th=[ 383], 10.00th=[ 412], 20.00th=[ 490], 00:36:05.919 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 627], 60.00th=[ 668], 00:36:05.919 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 824], 00:36:05.919 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 914], 99.95th=[ 914], 00:36:05.919 | 99.99th=[ 914] 00:36:05.919 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:05.919 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:05.919 lat (usec) : 250=1.32%, 500=22.31%, 750=50.85%, 1000=22.50% 00:36:05.919 lat (msec) : 50=3.02% 00:36:05.919 cpu : usr=1.36%, sys=1.65%, ctx=532, majf=0, minf=1 00:36:05.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.919 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:05.919 00:36:05.919 Run status group 0 (all jobs): 00:36:05.919 READ: bw=66.0KiB/s (67.6kB/s), 66.0KiB/s-66.0KiB/s (67.6kB/s-67.6kB/s), io=68.0KiB (69.6kB), run=1030-1030msec 00:36:05.919 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec 00:36:05.919 00:36:05.919 Disk stats (read/write): 00:36:05.919 nvme0n1: ios=39/512, merge=0/0, ticks=1485/258, in_queue=1743, util=98.60% 00:36:05.919 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:06.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:06.181 08:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:06.181 rmmod nvme_tcp 00:36:06.181 rmmod nvme_fabrics 00:36:06.181 rmmod nvme_keyring 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 4019383 ']' 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 4019383 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 4019383 ']' 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 4019383 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:06.181 08:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4019383 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4019383' 00:36:06.441 killing process with pid 4019383 00:36:06.441 08:50:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 4019383 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 4019383 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.441 08:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:08.985 00:36:08.985 real 0m15.279s 00:36:08.985 user 0m34.550s 00:36:08.985 sys 0m7.292s 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:08.985 ************************************ 00:36:08.985 END TEST nvmf_nmic 00:36:08.985 ************************************ 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:08.985 ************************************ 00:36:08.985 START TEST nvmf_fio_target 00:36:08.985 ************************************ 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:08.985 * Looking for test storage... 
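The nmic teardown traced just above (nvmftestfini) condenses to the steps below; this is a sketch based on the commands visible in the trace, with ip netns delete standing in for the _remove_spdk_ns helper, whose output the log redirects away. The nvmf_fio_target test starting here repeats the same init/teardown bracket around its own workload.

# Unload host-side fabrics modules now that the controller is
# disconnected (the trace shows the resulting rmmod messages).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target, then drop only the firewall rules this test added:
# they were tagged with an SPDK_NVMF comment at insertion time, so a
# save/filter/restore pass removes exactly those rules.
kill $nvmfpid
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the test namespace and flush the leftover initiator address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1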
00:36:08.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:08.985 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:08.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.986 --rc genhtml_branch_coverage=1 00:36:08.986 --rc genhtml_function_coverage=1 00:36:08.986 --rc genhtml_legend=1 00:36:08.986 --rc geninfo_all_blocks=1 00:36:08.986 --rc geninfo_unexecuted_blocks=1 00:36:08.986 00:36:08.986 ' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:08.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.986 --rc genhtml_branch_coverage=1 00:36:08.986 --rc genhtml_function_coverage=1 00:36:08.986 --rc genhtml_legend=1 00:36:08.986 --rc geninfo_all_blocks=1 00:36:08.986 --rc geninfo_unexecuted_blocks=1 00:36:08.986 00:36:08.986 ' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:08.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.986 --rc genhtml_branch_coverage=1 00:36:08.986 --rc genhtml_function_coverage=1 00:36:08.986 --rc genhtml_legend=1 00:36:08.986 --rc geninfo_all_blocks=1 00:36:08.986 --rc geninfo_unexecuted_blocks=1 00:36:08.986 00:36:08.986 ' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:08.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.986 --rc genhtml_branch_coverage=1 00:36:08.986 --rc genhtml_function_coverage=1 00:36:08.986 --rc genhtml_legend=1 00:36:08.986 --rc geninfo_all_blocks=1 00:36:08.986 --rc geninfo_unexecuted_blocks=1 00:36:08.986 
00:36:08.986 ' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:08.986 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.987 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.987 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.987 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:08.987 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:08.987 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:08.987 08:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:17.123 08:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # 
for pci in "${pci_devs[@]}" 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:17.123 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:17.123 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:17.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.123 08:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:17.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:17.123 08:51:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:17.123 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:17.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:17.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:36:17.124 00:36:17.124 --- 10.0.0.2 ping statistics --- 00:36:17.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.124 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:17.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:17.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:36:17.124 00:36:17.124 --- 10.0.0.1 ping statistics --- 00:36:17.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.124 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:17.124 08:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=4024997 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 4024997 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 4024997 ']' 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
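[annotation] The nvmfappstart/waitforlisten step above reduces to two actions: start nvmf_tgt inside the cvl_0_0_ns_spdk namespace with the flags visible in the trace, then poll its RPC socket until the app is ready. A minimal bash sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the readiness probe — the retry loop itself is illustrative, not the wrapper's exact code:

# Launch the target in the test namespace with the flags seen in the trace:
# instance id 0, tracepoint mask 0xFFFF, interrupt mode, cores 0-3 (-m 0xF).
ip netns exec cvl_0_0_ns_spdk \
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

# Block until the app answers on its UNIX-domain RPC socket.
until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.1
done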
00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:17.124 [2024-10-01 08:51:08.052552] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:17.124 [2024-10-01 08:51:08.053517] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:36:17.124 [2024-10-01 08:51:08.053553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.124 [2024-10-01 08:51:08.124359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:17.124 [2024-10-01 08:51:08.188403] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:17.124 [2024-10-01 08:51:08.188441] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.124 [2024-10-01 08:51:08.188454] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:17.124 [2024-10-01 08:51:08.188460] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:17.124 [2024-10-01 08:51:08.188466] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:17.124 [2024-10-01 08:51:08.192014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.124 [2024-10-01 08:51:08.192157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:17.124 [2024-10-01 08:51:08.192393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.124 [2024-10-01 08:51:08.192393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:17.124 [2024-10-01 08:51:08.256496] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:17.124 [2024-10-01 08:51:08.256514] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:17.124 [2024-10-01 08:51:08.256635] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:17.124 [2024-10-01 08:51:08.257181] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:17.124 [2024-10-01 08:51:08.257392] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
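[annotation] With the reactors up, fio.sh provisions the target entirely over RPC. The sketch below condenses the rpc.py calls traced in the following records into one bash sequence; the ordering is tidied and the loops are illustrative, but every command and argument appears verbatim in the trace (rpc.py talks to /var/tmp/spdk.sock by default):

rpc=./spdk/scripts/rpc.py

# TCP transport with an 8 KiB in-capsule data buffer.
$rpc nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB malloc bdevs with 512 B blocks: Malloc0/Malloc1 exported
# directly, Malloc2/Malloc3 striped into raid0, Malloc4-Malloc6
# concatenated into concat0.
for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc2 Malloc3"
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b "Malloc4 Malloc5 Malloc6"

# One subsystem with four namespaces and a TCP listener on the target-side
# address; after nvme connect, the initiator sees them as /dev/nvme0n1..nvme0n4,
# which is what the fio job files below are pointed at.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420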
00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:17.124 [2024-10-01 08:51:08.484933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:17.124 08:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:17.385 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:17.385 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:17.646 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:17.646 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:17.906 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:17.906 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:17.906 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:18.166 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:18.166 08:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:18.426 08:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:36:18.426 08:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:18.426 08:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:18.686 08:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:18.686 08:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:18.946 08:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:18.946 08:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:18.946 08:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:19.207 [2024-10-01 08:51:10.893113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.207 08:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:19.468 08:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:19.468 08:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:20.038 08:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:20.038 08:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:36:20.038 08:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:20.038 08:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:36:20.038 08:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:36:20.038 08:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:36:21.948 08:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:21.948 08:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:36:21.948 08:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:36:21.948 08:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:36:21.948 08:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:21.948 08:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:36:21.948 08:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:21.948 [global] 00:36:21.948 thread=1 00:36:21.948 invalidate=1 00:36:21.948 rw=write 00:36:21.948 time_based=1 00:36:21.948 runtime=1 00:36:21.948 ioengine=libaio 00:36:21.948 direct=1 00:36:21.948 bs=4096 00:36:21.948 iodepth=1 00:36:21.948 norandommap=0 00:36:21.948 numjobs=1 00:36:21.948 00:36:21.948 verify_dump=1 00:36:21.948 verify_backlog=512 00:36:21.948 verify_state_save=0 00:36:21.948 do_verify=1 00:36:21.948 verify=crc32c-intel 00:36:21.948 [job0] 00:36:21.948 filename=/dev/nvme0n1 00:36:21.948 [job1] 00:36:21.948 filename=/dev/nvme0n2 00:36:21.948 [job2] 00:36:21.948 filename=/dev/nvme0n3 00:36:21.948 [job3] 00:36:21.948 filename=/dev/nvme0n4 00:36:21.948 Could not set queue depth (nvme0n1) 00:36:21.948 Could not set queue depth (nvme0n2) 00:36:21.948 Could not set queue depth (nvme0n3) 00:36:21.948 Could not set queue depth (nvme0n4) 00:36:22.515 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:22.515 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:22.515 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:22.515 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:22.515 fio-3.35 00:36:22.515 Starting 4 threads 00:36:23.893 00:36:23.893 job0: (groupid=0, jobs=1): err= 0: pid=4026714: Tue Oct 1 08:51:15 2024 00:36:23.893 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:23.893 slat (nsec): min=12339, max=60570, avg=27556.54, stdev=3564.74 00:36:23.893 clat (usec): min=626, max=1568, avg=1069.33, stdev=97.48 00:36:23.893 lat (usec): min=653, max=1595, avg=1096.89, stdev=97.45 00:36:23.893 clat percentiles (usec): 00:36:23.893 | 1.00th=[ 816], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1004], 00:36:23.893 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:36:23.893 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:36:23.893 | 99.00th=[ 1352], 99.50th=[ 1467], 99.90th=[ 1565], 99.95th=[ 1565], 00:36:23.893 | 99.99th=[ 1565] 00:36:23.893 write: IOPS=590, BW=2362KiB/s (2418kB/s)(2364KiB/1001msec); 0 zone resets 00:36:23.893 slat (usec): min=9, max=32817, avg=87.66, stdev=1348.65 00:36:23.893 clat (usec): min=220, max=1186, avg=637.59, stdev=138.52 00:36:23.893 lat (usec): min=232, max=33320, avg=725.25, stdev=1350.55 00:36:23.893 clat percentiles (usec): 00:36:23.893 | 1.00th=[ 265], 5.00th=[ 412], 10.00th=[ 461], 20.00th=[ 519], 00:36:23.893 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 685], 00:36:23.893 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 848], 00:36:23.893 | 99.00th=[ 
955], 99.50th=[ 1029], 99.90th=[ 1188], 99.95th=[ 1188], 00:36:23.893 | 99.99th=[ 1188] 00:36:23.893 bw ( KiB/s): min= 4096, max= 4096, per=45.71%, avg=4096.00, stdev= 0.00, samples=1 00:36:23.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:23.893 lat (usec) : 250=0.27%, 500=9.07%, 750=33.45%, 1000=19.67% 00:36:23.893 lat (msec) : 2=37.53% 00:36:23.893 cpu : usr=2.50%, sys=4.20%, ctx=1106, majf=0, minf=1 00:36:23.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:23.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.893 issued rwts: total=512,591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:23.893 job1: (groupid=0, jobs=1): err= 0: pid=4026717: Tue Oct 1 08:51:15 2024 00:36:23.893 read: IOPS=28, BW=113KiB/s (116kB/s)(116KiB/1028msec) 00:36:23.893 slat (nsec): min=5370, max=29868, avg=21987.34, stdev=8260.89 00:36:23.893 clat (usec): min=407, max=41921, avg=25915.93, stdev=19995.09 00:36:23.893 lat (usec): min=417, max=41948, avg=25937.92, stdev=20000.13 00:36:23.893 clat percentiles (usec): 00:36:23.893 | 1.00th=[ 408], 5.00th=[ 603], 10.00th=[ 668], 20.00th=[ 734], 00:36:23.893 | 30.00th=[ 889], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:36:23.893 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:36:23.893 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:23.893 | 99.99th=[41681] 00:36:23.893 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:36:23.893 slat (nsec): min=10033, max=55387, avg=33248.89, stdev=8080.08 00:36:23.893 clat (usec): min=157, max=747, avg=495.88, stdev=100.25 00:36:23.893 lat (usec): min=192, max=781, avg=529.13, stdev=102.42 00:36:23.893 clat percentiles (usec): 00:36:23.893 | 1.00th=[ 231], 5.00th=[ 330], 10.00th=[ 367], 20.00th=[ 404], 00:36:23.893 | 30.00th=[ 445], 40.00th=[ 478], 50.00th=[ 506], 60.00th=[ 529], 00:36:23.893 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 619], 95.00th=[ 652], 00:36:23.893 | 99.00th=[ 709], 99.50th=[ 742], 99.90th=[ 750], 99.95th=[ 750], 00:36:23.893 | 99.99th=[ 750] 00:36:23.893 bw ( KiB/s): min= 4096, max= 4096, per=45.71%, avg=4096.00, stdev= 0.00, samples=1 00:36:23.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:23.893 lat (usec) : 250=1.48%, 500=43.44%, 750=50.83%, 1000=0.55% 00:36:23.893 lat (msec) : 2=0.37%, 50=3.33% 00:36:23.893 cpu : usr=0.49%, sys=1.95%, ctx=542, majf=0, minf=1 00:36:23.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:23.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.894 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:23.894 job2: (groupid=0, jobs=1): err= 0: pid=4026720: Tue Oct 1 08:51:15 2024 00:36:23.894 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:23.894 slat (nsec): min=14747, max=46698, avg=28000.11, stdev=2607.54 00:36:23.894 clat (usec): min=695, max=1498, avg=1042.65, stdev=86.56 00:36:23.894 lat (usec): min=724, max=1526, avg=1070.65, stdev=86.38 00:36:23.894 clat percentiles (usec): 00:36:23.894 | 1.00th=[ 840], 5.00th=[ 906], 10.00th=[ 947], 20.00th=[ 979], 00:36:23.894 | 
30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1057], 00:36:23.894 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:36:23.894 | 99.00th=[ 1254], 99.50th=[ 1401], 99.90th=[ 1500], 99.95th=[ 1500], 00:36:23.894 | 99.99th=[ 1500] 00:36:23.894 write: IOPS=687, BW=2749KiB/s (2815kB/s)(2752KiB/1001msec); 0 zone resets 00:36:23.894 slat (nsec): min=9662, max=72680, avg=31022.37, stdev=10938.40 00:36:23.894 clat (usec): min=255, max=983, avg=610.80, stdev=120.16 00:36:23.894 lat (usec): min=267, max=1018, avg=641.82, stdev=125.63 00:36:23.894 clat percentiles (usec): 00:36:23.894 | 1.00th=[ 347], 5.00th=[ 400], 10.00th=[ 449], 20.00th=[ 498], 00:36:23.894 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:36:23.894 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 799], 00:36:23.894 | 99.00th=[ 873], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 988], 00:36:23.894 | 99.99th=[ 988] 00:36:23.894 bw ( KiB/s): min= 4096, max= 4096, per=45.71%, avg=4096.00, stdev= 0.00, samples=1 00:36:23.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:23.894 lat (usec) : 500=11.58%, 750=39.00%, 1000=18.92% 00:36:23.894 lat (msec) : 2=30.50% 00:36:23.894 cpu : usr=2.90%, sys=4.30%, ctx=1201, majf=0, minf=1 00:36:23.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:23.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.894 issued rwts: total=512,688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:23.894 job3: (groupid=0, jobs=1): err= 0: pid=4026722: Tue Oct 1 08:51:15 2024 00:36:23.894 read: IOPS=15, BW=63.0KiB/s (64.5kB/s)(64.0KiB/1016msec) 00:36:23.894 slat (nsec): min=26573, max=27312, avg=26891.12, stdev=217.65 00:36:23.894 clat (usec): min=41346, max=42083, avg=41927.24, stdev=165.79 00:36:23.894 lat (usec): min=41373, max=42110, avg=41954.13, stdev=165.81 00:36:23.894 clat percentiles (usec): 00:36:23.894 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:36:23.894 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:36:23.894 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:23.894 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:23.894 | 99.99th=[42206] 00:36:23.894 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:36:23.894 slat (nsec): min=10291, max=74244, avg=32281.63, stdev=8871.19 00:36:23.894 clat (usec): min=321, max=945, avg=632.11, stdev=118.21 00:36:23.894 lat (usec): min=333, max=969, avg=664.39, stdev=120.76 00:36:23.894 clat percentiles (usec): 00:36:23.894 | 1.00th=[ 359], 5.00th=[ 433], 10.00th=[ 482], 20.00th=[ 529], 00:36:23.894 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 660], 00:36:23.894 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 799], 95.00th=[ 824], 00:36:23.894 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 947], 99.95th=[ 947], 00:36:23.894 | 99.99th=[ 947] 00:36:23.894 bw ( KiB/s): min= 4096, max= 4096, per=45.71%, avg=4096.00, stdev= 0.00, samples=1 00:36:23.894 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:23.894 lat (usec) : 500=13.45%, 750=66.86%, 1000=16.67% 00:36:23.894 lat (msec) : 50=3.03% 00:36:23.894 cpu : usr=0.79%, sys=1.58%, ctx=530, majf=0, minf=1 00:36:23.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:36:23.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.894 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:23.894 00:36:23.894 Run status group 0 (all jobs): 00:36:23.894 READ: bw=4160KiB/s (4259kB/s), 63.0KiB/s-2046KiB/s (64.5kB/s-2095kB/s), io=4276KiB (4379kB), run=1001-1028msec 00:36:23.894 WRITE: bw=8961KiB/s (9176kB/s), 1992KiB/s-2749KiB/s (2040kB/s-2815kB/s), io=9212KiB (9433kB), run=1001-1028msec 00:36:23.894 00:36:23.894 Disk stats (read/write): 00:36:23.894 nvme0n1: ios=463/512, merge=0/0, ticks=1419/262, in_queue=1681, util=96.09% 00:36:23.894 nvme0n2: ios=74/512, merge=0/0, ticks=940/241, in_queue=1181, util=96.73% 00:36:23.894 nvme0n3: ios=509/512, merge=0/0, ticks=1465/256, in_queue=1721, util=96.51% 00:36:23.894 nvme0n4: ios=68/512, merge=0/0, ticks=857/306, in_queue=1163, util=96.78% 00:36:23.894 08:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:23.894 [global] 00:36:23.894 thread=1 00:36:23.894 invalidate=1 00:36:23.894 rw=randwrite 00:36:23.894 time_based=1 00:36:23.894 runtime=1 00:36:23.894 ioengine=libaio 00:36:23.894 direct=1 00:36:23.894 bs=4096 00:36:23.894 iodepth=1 00:36:23.894 norandommap=0 00:36:23.894 numjobs=1 00:36:23.894 00:36:23.894 verify_dump=1 00:36:23.894 verify_backlog=512 00:36:23.894 verify_state_save=0 00:36:23.894 do_verify=1 00:36:23.894 verify=crc32c-intel 00:36:23.894 [job0] 00:36:23.894 filename=/dev/nvme0n1 00:36:23.894 [job1] 00:36:23.894 filename=/dev/nvme0n2 00:36:23.894 [job2] 00:36:23.894 filename=/dev/nvme0n3 00:36:23.894 [job3] 00:36:23.894 filename=/dev/nvme0n4 00:36:23.894 Could not set queue depth (nvme0n1) 00:36:23.894 Could not set queue depth (nvme0n2) 00:36:23.894 Could not set queue depth (nvme0n3) 00:36:23.894 Could not set queue depth (nvme0n4) 00:36:24.153 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:24.153 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:24.153 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:24.153 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:24.153 fio-3.35 00:36:24.153 Starting 4 threads 00:36:25.550 00:36:25.550 job0: (groupid=0, jobs=1): err= 0: pid=4027237: Tue Oct 1 08:51:16 2024 00:36:25.550 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:25.550 slat (nsec): min=8186, max=46737, avg=27394.21, stdev=3289.34 00:36:25.550 clat (usec): min=750, max=1656, avg=1080.97, stdev=122.35 00:36:25.550 lat (usec): min=777, max=1683, avg=1108.36, stdev=122.35 00:36:25.550 clat percentiles (usec): 00:36:25.550 | 1.00th=[ 775], 5.00th=[ 898], 10.00th=[ 947], 20.00th=[ 988], 00:36:25.550 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1090], 00:36:25.550 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1237], 95.00th=[ 1303], 00:36:25.550 | 99.00th=[ 1434], 99.50th=[ 1450], 99.90th=[ 1663], 99.95th=[ 1663], 00:36:25.550 | 99.99th=[ 1663] 00:36:25.550 write: IOPS=621, BW=2486KiB/s (2545kB/s)(2488KiB/1001msec); 0 zone resets 00:36:25.550 slat 
(nsec): min=9317, max=69023, avg=31434.35, stdev=9225.47 00:36:25.550 clat (usec): min=207, max=1255, avg=649.65, stdev=127.18 00:36:25.550 lat (usec): min=217, max=1289, avg=681.09, stdev=130.59 00:36:25.551 clat percentiles (usec): 00:36:25.551 | 1.00th=[ 347], 5.00th=[ 433], 10.00th=[ 490], 20.00th=[ 553], 00:36:25.551 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 676], 00:36:25.551 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 848], 00:36:25.551 | 99.00th=[ 996], 99.50th=[ 1057], 99.90th=[ 1254], 99.95th=[ 1254], 00:36:25.551 | 99.99th=[ 1254] 00:36:25.551 bw ( KiB/s): min= 4096, max= 4096, per=42.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:25.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:25.551 lat (usec) : 250=0.09%, 500=6.17%, 750=37.48%, 1000=21.52% 00:36:25.551 lat (msec) : 2=34.74% 00:36:25.551 cpu : usr=1.80%, sys=5.10%, ctx=1136, majf=0, minf=1 00:36:25.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.551 issued rwts: total=512,622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:25.551 job1: (groupid=0, jobs=1): err= 0: pid=4027238: Tue Oct 1 08:51:16 2024 00:36:25.551 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:25.551 slat (nsec): min=8056, max=46808, avg=27838.78, stdev=2670.79 00:36:25.551 clat (usec): min=707, max=1285, avg=1010.88, stdev=109.80 00:36:25.551 lat (usec): min=735, max=1312, avg=1038.72, stdev=109.79 00:36:25.551 clat percentiles (usec): 00:36:25.551 | 1.00th=[ 734], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 914], 00:36:25.551 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045], 00:36:25.551 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1188], 00:36:25.551 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1287], 99.95th=[ 1287], 00:36:25.551 | 99.99th=[ 1287] 00:36:25.551 write: IOPS=699, BW=2797KiB/s (2864kB/s)(2800KiB/1001msec); 0 zone resets 00:36:25.551 slat (nsec): min=9309, max=69131, avg=32245.53, stdev=8931.52 00:36:25.551 clat (usec): min=198, max=915, avg=621.89, stdev=124.84 00:36:25.551 lat (usec): min=233, max=955, avg=654.14, stdev=127.88 00:36:25.551 clat percentiles (usec): 00:36:25.551 | 1.00th=[ 334], 5.00th=[ 400], 10.00th=[ 461], 20.00th=[ 510], 00:36:25.551 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:36:25.551 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 824], 00:36:25.551 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 914], 99.95th=[ 914], 00:36:25.551 | 99.99th=[ 914] 00:36:25.551 bw ( KiB/s): min= 4096, max= 4096, per=42.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:25.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:25.551 lat (usec) : 250=0.17%, 500=10.40%, 750=38.70%, 1000=25.74% 00:36:25.551 lat (msec) : 2=25.00% 00:36:25.551 cpu : usr=2.30%, sys=5.30%, ctx=1213, majf=0, minf=1 00:36:25.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.551 issued rwts: total=512,700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:25.551 job2: 
(groupid=0, jobs=1): err= 0: pid=4027239: Tue Oct 1 08:51:16 2024 00:36:25.551 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:25.551 slat (nsec): min=25136, max=64399, avg=26611.98, stdev=3575.09 00:36:25.551 clat (usec): min=717, max=1397, avg=1092.88, stdev=108.74 00:36:25.551 lat (usec): min=742, max=1423, avg=1119.49, stdev=108.63 00:36:25.551 clat percentiles (usec): 00:36:25.551 | 1.00th=[ 807], 5.00th=[ 881], 10.00th=[ 955], 20.00th=[ 1012], 00:36:25.551 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123], 00:36:25.551 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1254], 00:36:25.551 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1401], 99.95th=[ 1401], 00:36:25.551 | 99.99th=[ 1401] 00:36:25.551 write: IOPS=610, BW=2442KiB/s (2500kB/s)(2444KiB/1001msec); 0 zone resets 00:36:25.551 slat (nsec): min=9467, max=53378, avg=29624.79, stdev=8654.28 00:36:25.551 clat (usec): min=237, max=977, avg=654.02, stdev=124.45 00:36:25.551 lat (usec): min=250, max=1009, avg=683.65, stdev=127.53 00:36:25.551 clat percentiles (usec): 00:36:25.551 | 1.00th=[ 351], 5.00th=[ 445], 10.00th=[ 490], 20.00th=[ 562], 00:36:25.551 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 685], 00:36:25.551 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 848], 00:36:25.551 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 979], 99.95th=[ 979], 00:36:25.551 | 99.99th=[ 979] 00:36:25.551 bw ( KiB/s): min= 4096, max= 4096, per=42.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:25.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:25.551 lat (usec) : 250=0.18%, 500=6.41%, 750=35.98%, 1000=19.32% 00:36:25.551 lat (msec) : 2=38.11% 00:36:25.551 cpu : usr=1.00%, sys=4.00%, ctx=1123, majf=0, minf=1 00:36:25.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.551 issued rwts: total=512,611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:25.551 job3: (groupid=0, jobs=1): err= 0: pid=4027240: Tue Oct 1 08:51:16 2024 00:36:25.551 read: IOPS=15, BW=62.6KiB/s (64.1kB/s)(64.0KiB/1022msec) 00:36:25.551 slat (nsec): min=25555, max=26431, avg=25934.69, stdev=210.80 00:36:25.551 clat (usec): min=1250, max=42207, avg=36843.92, stdev=13888.42 00:36:25.551 lat (usec): min=1276, max=42233, avg=36869.86, stdev=13888.34 00:36:25.551 clat percentiles (usec): 00:36:25.551 | 1.00th=[ 1254], 5.00th=[ 1254], 10.00th=[ 1287], 20.00th=[41681], 00:36:25.551 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:36:25.551 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:25.551 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:25.551 | 99.99th=[42206] 00:36:25.551 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:36:25.551 slat (nsec): min=9663, max=52202, avg=29811.60, stdev=7582.64 00:36:25.551 clat (usec): min=285, max=1162, avg=805.69, stdev=156.88 00:36:25.551 lat (usec): min=296, max=1194, avg=835.50, stdev=159.47 00:36:25.551 clat percentiles (usec): 00:36:25.551 | 1.00th=[ 375], 5.00th=[ 519], 10.00th=[ 570], 20.00th=[ 668], 00:36:25.551 | 30.00th=[ 742], 40.00th=[ 791], 50.00th=[ 832], 60.00th=[ 881], 00:36:25.551 | 70.00th=[ 906], 80.00th=[ 938], 90.00th=[ 979], 95.00th=[ 1004], 00:36:25.551 | 
99.00th=[ 1057], 99.50th=[ 1057], 99.90th=[ 1156], 99.95th=[ 1156], 00:36:25.551 | 99.99th=[ 1156] 00:36:25.551 bw ( KiB/s): min= 4096, max= 4096, per=42.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:25.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:25.551 lat (usec) : 500=3.98%, 750=26.33%, 1000=60.98% 00:36:25.551 lat (msec) : 2=6.06%, 50=2.65% 00:36:25.551 cpu : usr=0.88%, sys=1.47%, ctx=528, majf=0, minf=2 00:36:25.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.551 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:25.551 00:36:25.551 Run status group 0 (all jobs): 00:36:25.551 READ: bw=6074KiB/s (6220kB/s), 62.6KiB/s-2046KiB/s (64.1kB/s-2095kB/s), io=6208KiB (6357kB), run=1001-1022msec 00:36:25.551 WRITE: bw=9569KiB/s (9799kB/s), 2004KiB/s-2797KiB/s (2052kB/s-2864kB/s), io=9780KiB (10.0MB), run=1001-1022msec 00:36:25.551 00:36:25.551 Disk stats (read/write): 00:36:25.551 nvme0n1: ios=462/512, merge=0/0, ticks=1410/272, in_queue=1682, util=96.69% 00:36:25.551 nvme0n2: ios=525/512, merge=0/0, ticks=1453/250, in_queue=1703, util=97.25% 00:36:25.551 nvme0n3: ios=447/512, merge=0/0, ticks=647/324, in_queue=971, util=91.35% 00:36:25.551 nvme0n4: ios=50/512, merge=0/0, ticks=456/401, in_queue=857, util=94.88% 00:36:25.551 08:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:25.551 [global] 00:36:25.551 thread=1 00:36:25.551 invalidate=1 00:36:25.551 rw=write 00:36:25.551 time_based=1 00:36:25.551 runtime=1 00:36:25.551 ioengine=libaio 00:36:25.551 direct=1 00:36:25.551 bs=4096 00:36:25.551 iodepth=128 00:36:25.551 norandommap=0 00:36:25.551 numjobs=1 00:36:25.551 00:36:25.551 verify_dump=1 00:36:25.551 verify_backlog=512 00:36:25.552 verify_state_save=0 00:36:25.552 do_verify=1 00:36:25.552 verify=crc32c-intel 00:36:25.552 [job0] 00:36:25.552 filename=/dev/nvme0n1 00:36:25.552 [job1] 00:36:25.552 filename=/dev/nvme0n2 00:36:25.552 [job2] 00:36:25.552 filename=/dev/nvme0n3 00:36:25.552 [job3] 00:36:25.552 filename=/dev/nvme0n4 00:36:25.552 Could not set queue depth (nvme0n1) 00:36:25.552 Could not set queue depth (nvme0n2) 00:36:25.552 Could not set queue depth (nvme0n3) 00:36:25.552 Could not set queue depth (nvme0n4) 00:36:25.811 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:25.811 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:25.811 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:25.811 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:25.811 fio-3.35 00:36:25.811 Starting 4 threads 00:36:26.807 00:36:26.807 job0: (groupid=0, jobs=1): err= 0: pid=4027760: Tue Oct 1 08:51:18 2024 00:36:26.807 read: IOPS=6459, BW=25.2MiB/s (26.5MB/s)(25.4MiB/1005msec) 00:36:26.807 slat (nsec): min=921, max=10406k, avg=76521.02, stdev=519715.05 00:36:26.807 clat (usec): min=939, max=22798, avg=10233.84, stdev=3147.20 00:36:26.807 lat (usec): min=1204, max=22822, avg=10310.37, 
stdev=3177.36 00:36:26.807 clat percentiles (usec): 00:36:26.807 | 1.00th=[ 3982], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 7439], 00:36:26.807 | 30.00th=[ 8094], 40.00th=[ 9110], 50.00th=[10290], 60.00th=[11338], 00:36:26.807 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14091], 95.00th=[15664], 00:36:26.807 | 99.00th=[17171], 99.50th=[19792], 99.90th=[21890], 99.95th=[21890], 00:36:26.807 | 99.99th=[22676] 00:36:26.807 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:36:26.807 slat (nsec): min=1602, max=7369.7k, avg=65869.03, stdev=443604.55 00:36:26.807 clat (usec): min=523, max=57104, avg=9076.14, stdev=6626.18 00:36:26.807 lat (usec): min=532, max=57113, avg=9142.01, stdev=6670.29 00:36:26.807 clat percentiles (usec): 00:36:26.807 | 1.00th=[ 2343], 5.00th=[ 3359], 10.00th=[ 4228], 20.00th=[ 5276], 00:36:26.807 | 30.00th=[ 6128], 40.00th=[ 6915], 50.00th=[ 7570], 60.00th=[ 8979], 00:36:26.807 | 70.00th=[10421], 80.00th=[11469], 90.00th=[12911], 95.00th=[15270], 00:36:26.807 | 99.00th=[47973], 99.50th=[54789], 99.90th=[56361], 99.95th=[56886], 00:36:26.807 | 99.99th=[56886] 00:36:26.807 bw ( KiB/s): min=24576, max=28672, per=33.60%, avg=26624.00, stdev=2896.31, samples=2 00:36:26.807 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:36:26.807 lat (usec) : 750=0.09%, 1000=0.05% 00:36:26.807 lat (msec) : 2=0.37%, 4=4.07%, 10=53.12%, 20=40.56%, 50=1.34% 00:36:26.807 lat (msec) : 100=0.41% 00:36:26.807 cpu : usr=4.48%, sys=6.27%, ctx=535, majf=0, minf=1 00:36:26.807 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:36:26.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:26.807 issued rwts: total=6492,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.807 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:26.807 job1: (groupid=0, jobs=1): err= 0: pid=4027761: Tue Oct 1 08:51:18 2024 00:36:26.807 read: IOPS=5632, BW=22.0MiB/s (23.1MB/s)(22.2MiB/1008msec) 00:36:26.807 slat (nsec): min=937, max=8076.1k, avg=77152.27, stdev=503979.21 00:36:26.807 clat (usec): min=2851, max=42467, avg=10012.68, stdev=3556.13 00:36:26.807 lat (usec): min=2863, max=43893, avg=10089.83, stdev=3592.90 00:36:26.807 clat percentiles (usec): 00:36:26.807 | 1.00th=[ 3556], 5.00th=[ 5014], 10.00th=[ 6194], 20.00th=[ 7111], 00:36:26.807 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10683], 00:36:26.807 | 70.00th=[11600], 80.00th=[12387], 90.00th=[13960], 95.00th=[15795], 00:36:26.807 | 99.00th=[21365], 99.50th=[22414], 99.90th=[42206], 99.95th=[42206], 00:36:26.807 | 99.99th=[42206] 00:36:26.807 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:36:26.807 slat (nsec): min=1691, max=23742k, avg=78028.86, stdev=525248.70 00:36:26.807 clat (usec): min=706, max=39773, avg=11041.55, stdev=7042.02 00:36:26.807 lat (usec): min=803, max=39775, avg=11119.58, stdev=7091.20 00:36:26.807 clat percentiles (usec): 00:36:26.807 | 1.00th=[ 2474], 5.00th=[ 4047], 10.00th=[ 4817], 20.00th=[ 5735], 00:36:26.807 | 30.00th=[ 6652], 40.00th=[ 8455], 50.00th=[ 9634], 60.00th=[10683], 00:36:26.808 | 70.00th=[11600], 80.00th=[13435], 90.00th=[21365], 95.00th=[26870], 00:36:26.808 | 99.00th=[36439], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:36:26.808 | 99.99th=[39584] 00:36:26.808 bw ( KiB/s): min=20936, max=27560, per=30.60%, avg=24248.00, stdev=4683.88, samples=2 00:36:26.808 iops : min= 
5234, max= 6890, avg=6062.00, stdev=1170.97, samples=2 00:36:26.808 lat (usec) : 750=0.01%, 1000=0.02% 00:36:26.808 lat (msec) : 2=0.08%, 4=3.09%, 10=50.10%, 20=40.03%, 50=6.67% 00:36:26.808 cpu : usr=3.87%, sys=6.55%, ctx=470, majf=0, minf=1 00:36:26.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:36:26.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:26.808 issued rwts: total=5678,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:26.808 job2: (groupid=0, jobs=1): err= 0: pid=4027763: Tue Oct 1 08:51:18 2024 00:36:26.808 read: IOPS=3349, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1003msec) 00:36:26.808 slat (nsec): min=920, max=8796.0k, avg=128108.88, stdev=701027.04 00:36:26.808 clat (usec): min=1072, max=40981, avg=16324.12, stdev=8165.32 00:36:26.808 lat (usec): min=4338, max=45008, avg=16452.23, stdev=8209.70 00:36:26.808 clat percentiles (usec): 00:36:26.808 | 1.00th=[ 5014], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9634], 00:36:26.808 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11469], 60.00th=[16712], 00:36:26.808 | 70.00th=[21627], 80.00th=[24773], 90.00th=[28181], 95.00th=[31851], 00:36:26.808 | 99.00th=[36439], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:36:26.808 | 99.99th=[41157] 00:36:26.808 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:36:26.808 slat (nsec): min=1558, max=7621.9k, avg=153014.73, stdev=577644.27 00:36:26.808 clat (usec): min=765, max=50479, avg=20192.19, stdev=14065.07 00:36:26.808 lat (usec): min=774, max=50486, avg=20345.21, stdev=14159.91 00:36:26.808 clat percentiles (usec): 00:36:26.808 | 1.00th=[ 1467], 5.00th=[ 6063], 10.00th=[ 7504], 20.00th=[ 8029], 00:36:26.808 | 30.00th=[ 9503], 40.00th=[12518], 50.00th=[13042], 60.00th=[17171], 00:36:26.808 | 70.00th=[25822], 80.00th=[35914], 90.00th=[44827], 95.00th=[46924], 00:36:26.808 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:36:26.808 | 99.99th=[50594] 00:36:26.808 bw ( KiB/s): min=12288, max=16384, per=18.09%, avg=14336.00, stdev=2896.31, samples=2 00:36:26.808 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:36:26.808 lat (usec) : 1000=0.04% 00:36:26.808 lat (msec) : 2=0.66%, 4=0.49%, 10=27.40%, 20=35.15%, 50=36.15% 00:36:26.808 lat (msec) : 100=0.10% 00:36:26.808 cpu : usr=2.40%, sys=3.19%, ctx=557, majf=0, minf=2 00:36:26.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:36:26.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:26.808 issued rwts: total=3360,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:26.808 job3: (groupid=0, jobs=1): err= 0: pid=4027764: Tue Oct 1 08:51:18 2024 00:36:26.808 read: IOPS=3286, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1003msec) 00:36:26.808 slat (nsec): min=928, max=8154.6k, avg=156080.12, stdev=851015.87 00:36:26.808 clat (usec): min=1394, max=32213, avg=19500.20, stdev=6398.36 00:36:26.808 lat (usec): min=4093, max=32219, avg=19656.28, stdev=6400.42 00:36:26.808 clat percentiles (usec): 00:36:26.808 | 1.00th=[ 5014], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[12387], 00:36:26.808 | 30.00th=[15401], 40.00th=[17695], 50.00th=[21365], 60.00th=[22938], 00:36:26.808 | 
70.00th=[24249], 80.00th=[25035], 90.00th=[26346], 95.00th=[28967], 00:36:26.808 | 99.00th=[30278], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:36:26.808 | 99.99th=[32113] 00:36:26.808 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:36:26.808 slat (nsec): min=1582, max=16472k, avg=130110.54, stdev=801233.45 00:36:26.808 clat (usec): min=1200, max=53542, avg=17565.84, stdev=8610.19 00:36:26.808 lat (usec): min=1214, max=53549, avg=17695.95, stdev=8640.51 00:36:26.808 clat percentiles (usec): 00:36:26.808 | 1.00th=[ 7111], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[11600], 00:36:26.808 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14222], 60.00th=[17433], 00:36:26.808 | 70.00th=[19530], 80.00th=[22152], 90.00th=[27919], 95.00th=[35390], 00:36:26.808 | 99.00th=[50594], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:36:26.808 | 99.99th=[53740] 00:36:26.808 bw ( KiB/s): min=13888, max=14784, per=18.09%, avg=14336.00, stdev=633.57, samples=2 00:36:26.808 iops : min= 3472, max= 3696, avg=3584.00, stdev=158.39, samples=2 00:36:26.808 lat (msec) : 2=0.04%, 10=8.90%, 20=49.52%, 50=40.64%, 100=0.90% 00:36:26.808 cpu : usr=2.30%, sys=3.79%, ctx=309, majf=0, minf=2 00:36:26.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:36:26.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:26.808 issued rwts: total=3296,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:26.808 00:36:26.808 Run status group 0 (all jobs): 00:36:26.808 READ: bw=73.0MiB/s (76.5MB/s), 12.8MiB/s-25.2MiB/s (13.5MB/s-26.5MB/s), io=73.5MiB (77.1MB), run=1003-1008msec 00:36:26.808 WRITE: bw=77.4MiB/s (81.1MB/s), 14.0MiB/s-25.9MiB/s (14.6MB/s-27.1MB/s), io=78.0MiB (81.8MB), run=1003-1008msec 00:36:26.808 00:36:26.808 Disk stats (read/write): 00:36:26.808 nvme0n1: ios=5656/6007, merge=0/0, ticks=32386/25822, in_queue=58208, util=95.89% 00:36:26.808 nvme0n2: ios=4638/4811, merge=0/0, ticks=25810/30704, in_queue=56514, util=98.37% 00:36:26.808 nvme0n3: ios=2600/2655, merge=0/0, ticks=11811/17075, in_queue=28886, util=90.82% 00:36:26.808 nvme0n4: ios=2555/2560, merge=0/0, ticks=14095/13879, in_queue=27974, util=89.32% 00:36:26.808 08:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:27.121 [global] 00:36:27.121 thread=1 00:36:27.121 invalidate=1 00:36:27.121 rw=randwrite 00:36:27.121 time_based=1 00:36:27.121 runtime=1 00:36:27.121 ioengine=libaio 00:36:27.121 direct=1 00:36:27.121 bs=4096 00:36:27.121 iodepth=128 00:36:27.121 norandommap=0 00:36:27.121 numjobs=1 00:36:27.121 00:36:27.121 verify_dump=1 00:36:27.121 verify_backlog=512 00:36:27.121 verify_state_save=0 00:36:27.121 do_verify=1 00:36:27.121 verify=crc32c-intel 00:36:27.121 [job0] 00:36:27.121 filename=/dev/nvme0n1 00:36:27.121 [job1] 00:36:27.121 filename=/dev/nvme0n2 00:36:27.121 [job2] 00:36:27.121 filename=/dev/nvme0n3 00:36:27.121 [job3] 00:36:27.121 filename=/dev/nvme0n4 00:36:27.121 Could not set queue depth (nvme0n1) 00:36:27.121 Could not set queue depth (nvme0n2) 00:36:27.121 Could not set queue depth (nvme0n3) 00:36:27.121 Could not set queue depth (nvme0n4) 00:36:27.427 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:36:27.427 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:27.427 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:27.427 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:27.427 fio-3.35 00:36:27.427 Starting 4 threads 00:36:28.840 00:36:28.840 job0: (groupid=0, jobs=1): err= 0: pid=4028285: Tue Oct 1 08:51:20 2024 00:36:28.840 read: IOPS=7972, BW=31.1MiB/s (32.7MB/s)(31.3MiB/1006msec) 00:36:28.840 slat (nsec): min=909, max=7731.3k, avg=61435.72, stdev=444020.80 00:36:28.840 clat (usec): min=3805, max=18166, avg=8354.44, stdev=2470.02 00:36:28.840 lat (usec): min=3812, max=18923, avg=8415.88, stdev=2493.83 00:36:28.840 clat percentiles (usec): 00:36:28.840 | 1.00th=[ 4424], 5.00th=[ 5342], 10.00th=[ 6063], 20.00th=[ 6456], 00:36:28.840 | 30.00th=[ 6718], 40.00th=[ 7308], 50.00th=[ 7832], 60.00th=[ 8586], 00:36:28.840 | 70.00th=[ 8979], 80.00th=[ 9896], 90.00th=[11469], 95.00th=[13566], 00:36:28.840 | 99.00th=[16909], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:36:28.840 | 99.99th=[18220] 00:36:28.840 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:36:28.840 slat (nsec): min=1506, max=7279.4k, avg=54731.17, stdev=380234.76 00:36:28.840 clat (usec): min=1158, max=16781, avg=7393.38, stdev=2122.85 00:36:28.840 lat (usec): min=1167, max=16784, avg=7448.11, stdev=2134.73 00:36:28.840 clat percentiles (usec): 00:36:28.840 | 1.00th=[ 3720], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 5538], 00:36:28.841 | 30.00th=[ 5932], 40.00th=[ 6259], 50.00th=[ 6980], 60.00th=[ 8094], 00:36:28.841 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10552], 00:36:28.841 | 99.00th=[10945], 99.50th=[13435], 99.90th=[16319], 99.95th=[16450], 00:36:28.841 | 99.99th=[16909] 00:36:28.841 bw ( KiB/s): min=28672, max=36864, per=34.04%, avg=32768.00, stdev=5792.62, samples=2 00:36:28.841 iops : min= 7168, max= 9216, avg=8192.00, stdev=1448.15, samples=2 00:36:28.841 lat (msec) : 2=0.06%, 4=1.57%, 10=84.78%, 20=13.59% 00:36:28.841 cpu : usr=5.57%, sys=7.46%, ctx=531, majf=0, minf=1 00:36:28.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:28.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:28.841 issued rwts: total=8020,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:28.841 job1: (groupid=0, jobs=1): err= 0: pid=4028286: Tue Oct 1 08:51:20 2024 00:36:28.841 read: IOPS=6417, BW=25.1MiB/s (26.3MB/s)(25.2MiB/1004msec) 00:36:28.841 slat (nsec): min=923, max=8958.4k, avg=74952.38, stdev=442949.52 00:36:28.841 clat (usec): min=2241, max=34192, avg=9347.92, stdev=4905.55 00:36:28.841 lat (usec): min=3921, max=34199, avg=9422.87, stdev=4954.16 00:36:28.841 clat percentiles (usec): 00:36:28.841 | 1.00th=[ 5211], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6521], 00:36:28.841 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7898], 00:36:28.841 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[17695], 95.00th=[22152], 00:36:28.841 | 99.00th=[25297], 99.50th=[26870], 99.90th=[34341], 99.95th=[34341], 00:36:28.841 | 99.99th=[34341] 00:36:28.841 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 
00:36:28.841 slat (nsec): min=1569, max=14223k, avg=74247.33, stdev=554576.50 00:36:28.841 clat (usec): min=2245, max=48548, avg=9960.08, stdev=7251.86 00:36:28.841 lat (usec): min=2249, max=48580, avg=10034.33, stdev=7319.51 00:36:28.841 clat percentiles (usec): 00:36:28.841 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 6783], 00:36:28.841 | 30.00th=[ 6915], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:36:28.841 | 70.00th=[ 7504], 80.00th=[ 8848], 90.00th=[19006], 95.00th=[30540], 00:36:28.841 | 99.00th=[38011], 99.50th=[39060], 99.90th=[40109], 99.95th=[44827], 00:36:28.841 | 99.99th=[48497] 00:36:28.841 bw ( KiB/s): min=17864, max=35384, per=27.66%, avg=26624.00, stdev=12388.51, samples=2 00:36:28.841 iops : min= 4466, max= 8846, avg=6656.00, stdev=3097.13, samples=2 00:36:28.841 lat (msec) : 4=0.11%, 10=81.59%, 20=10.44%, 50=7.86% 00:36:28.841 cpu : usr=1.89%, sys=6.08%, ctx=747, majf=0, minf=1 00:36:28.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:36:28.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:28.841 issued rwts: total=6443,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:28.841 job2: (groupid=0, jobs=1): err= 0: pid=4028287: Tue Oct 1 08:51:20 2024 00:36:28.841 read: IOPS=4588, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:36:28.841 slat (nsec): min=954, max=8436.9k, avg=84558.18, stdev=526998.57 00:36:28.841 clat (usec): min=1913, max=25469, avg=10939.52, stdev=3809.01 00:36:28.841 lat (usec): min=1920, max=25479, avg=11024.08, stdev=3852.67 00:36:28.841 clat percentiles (usec): 00:36:28.841 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 6325], 20.00th=[ 7373], 00:36:28.841 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[10683], 60.00th=[11469], 00:36:28.841 | 70.00th=[12125], 80.00th=[13304], 90.00th=[16057], 95.00th=[19006], 00:36:28.841 | 99.00th=[21627], 99.50th=[22152], 99.90th=[23462], 99.95th=[23725], 00:36:28.841 | 99.99th=[25560] 00:36:28.841 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:36:28.841 slat (nsec): min=1606, max=8019.9k, avg=104770.43, stdev=498016.84 00:36:28.841 clat (usec): min=533, max=49883, avg=15024.77, stdev=9094.76 00:36:28.841 lat (usec): min=567, max=49888, avg=15129.54, stdev=9155.44 00:36:28.841 clat percentiles (usec): 00:36:28.841 | 1.00th=[ 1713], 5.00th=[ 3458], 10.00th=[ 5211], 20.00th=[ 7504], 00:36:28.841 | 30.00th=[ 8160], 40.00th=[10028], 50.00th=[12780], 60.00th=[15401], 00:36:28.841 | 70.00th=[19268], 80.00th=[24773], 90.00th=[28967], 95.00th=[31327], 00:36:28.841 | 99.00th=[37487], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:36:28.841 | 99.99th=[50070] 00:36:28.841 bw ( KiB/s): min=16384, max=23616, per=20.78%, avg=20000.00, stdev=5113.80, samples=2 00:36:28.841 iops : min= 4096, max= 5904, avg=5000.00, stdev=1278.45, samples=2 00:36:28.841 lat (usec) : 750=0.03% 00:36:28.841 lat (msec) : 2=0.73%, 4=3.09%, 10=33.83%, 20=45.82%, 50=16.50% 00:36:28.841 cpu : usr=3.98%, sys=3.88%, ctx=560, majf=0, minf=1 00:36:28.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:28.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:28.841 issued rwts: total=4616,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.841 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:36:28.841 job3: (groupid=0, jobs=1): err= 0: pid=4028288: Tue Oct 1 08:51:20 2024 00:36:28.841 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:36:28.841 slat (nsec): min=931, max=9192.4k, avg=111879.00, stdev=624943.22 00:36:28.841 clat (usec): min=7135, max=37322, avg=13834.11, stdev=5274.52 00:36:28.841 lat (usec): min=7140, max=37355, avg=13945.98, stdev=5323.47 00:36:28.841 clat percentiles (usec): 00:36:28.841 | 1.00th=[ 8291], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10552], 00:36:28.841 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11994], 60.00th=[12780], 00:36:28.841 | 70.00th=[13698], 80.00th=[15008], 90.00th=[22938], 95.00th=[25560], 00:36:28.841 | 99.00th=[32375], 99.50th=[33162], 99.90th=[36963], 99.95th=[36963], 00:36:28.841 | 99.99th=[37487] 00:36:28.841 write: IOPS=4217, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1005msec); 0 zone resets 00:36:28.841 slat (nsec): min=1556, max=12559k, avg=123238.97, stdev=615394.11 00:36:28.841 clat (usec): min=4002, max=37109, avg=16651.37, stdev=6039.48 00:36:28.841 lat (usec): min=4666, max=38179, avg=16774.60, stdev=6085.72 00:36:28.841 clat percentiles (usec): 00:36:28.841 | 1.00th=[ 7832], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10945], 00:36:28.841 | 30.00th=[12125], 40.00th=[13829], 50.00th=[15270], 60.00th=[17171], 00:36:28.841 | 70.00th=[19268], 80.00th=[22152], 90.00th=[25560], 95.00th=[27919], 00:36:28.841 | 99.00th=[32375], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:36:28.841 | 99.99th=[36963] 00:36:28.841 bw ( KiB/s): min=15112, max=17784, per=17.09%, avg=16448.00, stdev=1889.39, samples=2 00:36:28.841 iops : min= 3778, max= 4446, avg=4112.00, stdev=472.35, samples=2 00:36:28.841 lat (msec) : 10=8.84%, 20=71.43%, 50=19.72% 00:36:28.841 cpu : usr=2.59%, sys=4.08%, ctx=461, majf=0, minf=1 00:36:28.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:28.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:28.841 issued rwts: total=4096,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:28.841 00:36:28.841 Run status group 0 (all jobs): 00:36:28.841 READ: bw=90.0MiB/s (94.4MB/s), 15.9MiB/s-31.1MiB/s (16.7MB/s-32.7MB/s), io=90.5MiB (94.9MB), run=1004-1006msec 00:36:28.841 WRITE: bw=94.0MiB/s (98.6MB/s), 16.5MiB/s-31.8MiB/s (17.3MB/s-33.4MB/s), io=94.6MiB (99.2MB), run=1004-1006msec 00:36:28.841 00:36:28.841 Disk stats (read/write): 00:36:28.841 nvme0n1: ios=6743/7168, merge=0/0, ticks=51681/49711, in_queue=101392, util=86.57% 00:36:28.841 nvme0n2: ios=5865/6144, merge=0/0, ticks=23085/24078, in_queue=47163, util=97.55% 00:36:28.841 nvme0n3: ios=3619/3687, merge=0/0, ticks=28670/34810, in_queue=63480, util=96.73% 00:36:28.841 nvme0n4: ios=3072/3584, merge=0/0, ticks=21876/28866, in_queue=50742, util=89.53% 00:36:28.841 08:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:28.841 08:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4028575 00:36:28.841 08:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:28.841 08:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
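With the fixed 4096 B block size, the BW column in the run summary above is simply IOPS multiplied by the block size: job0's 8143 write IOPS x 4096 B = 33,353,728 B/s, i.e. the reported 31.8MiB/s (33.4MB/s), and job1's 6417 read IOPS work out the same way to 25.1MiB/s (26.3MB/s). The read job launched here in the background (fio_pid=4028575) is the hotplug half of the test: the script deletes the backing bdevs while I/O is in flight, so the io_u errors that follow (err=95, "Operation not supported", i.e. EOPNOTSUPP) are the expected result rather than a failure. A minimal sketch of that pattern, reusing commands visible in this trace (paths shortened to scripts/; this is illustrative, not the literal target/fio.sh source):
# start I/O in the background against the exported namespaces
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# delete the backing bdevs while fio is still reading; reads against the
# vanished namespaces are expected to fail with "Operation not supported"
scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
scripts/rpc.py bdev_malloc_delete Malloc0
scripts/rpc.py bdev_malloc_delete Malloc1
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'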
00:36:28.841 [global] 00:36:28.841 thread=1 00:36:28.841 invalidate=1 00:36:28.841 rw=read 00:36:28.841 time_based=1 00:36:28.841 runtime=10 00:36:28.841 ioengine=libaio 00:36:28.841 direct=1 00:36:28.841 bs=4096 00:36:28.841 iodepth=1 00:36:28.841 norandommap=1 00:36:28.841 numjobs=1 00:36:28.841 00:36:28.841 [job0] 00:36:28.841 filename=/dev/nvme0n1 00:36:28.841 [job1] 00:36:28.841 filename=/dev/nvme0n2 00:36:28.841 [job2] 00:36:28.841 filename=/dev/nvme0n3 00:36:28.841 [job3] 00:36:28.841 filename=/dev/nvme0n4 00:36:28.841 Could not set queue depth (nvme0n1) 00:36:28.841 Could not set queue depth (nvme0n2) 00:36:28.841 Could not set queue depth (nvme0n3) 00:36:28.841 Could not set queue depth (nvme0n4) 00:36:29.100 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:29.100 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:29.100 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:29.100 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:29.100 fio-3.35 00:36:29.100 Starting 4 threads 00:36:31.648 08:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:31.648 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3993600, buflen=4096 00:36:31.648 fio: pid=4028818, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:31.648 08:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:31.908 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4173824, buflen=4096 00:36:31.908 fio: pid=4028817, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:31.908 08:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:31.908 08:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:32.168 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10858496, buflen=4096 00:36:32.168 fio: pid=4028814, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:32.168 08:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:32.168 08:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:32.428 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12009472, buflen=4096 00:36:32.428 fio: pid=4028816, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:32.428 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:32.428 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:32.428 00:36:32.428 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4028814: Tue Oct 1 08:51:24 2024 00:36:32.428 read: IOPS=910, BW=3641KiB/s (3729kB/s)(10.4MiB/2912msec) 00:36:32.428 slat (usec): min=6, max=31299, avg=61.21, stdev=897.65 00:36:32.428 clat (usec): min=385, max=2839, avg=1022.47, stdev=141.90 00:36:32.428 lat (usec): min=413, max=32386, avg=1083.70, stdev=910.35 00:36:32.428 clat percentiles (usec): 00:36:32.428 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 947], 00:36:32.428 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1057], 00:36:32.428 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:36:32.428 | 99.00th=[ 1319], 99.50th=[ 1811], 99.90th=[ 2540], 99.95th=[ 2606], 00:36:32.428 | 99.99th=[ 2835] 00:36:32.428 bw ( KiB/s): min= 3760, max= 3824, per=38.86%, avg=3793.60, stdev=27.94, samples=5 00:36:32.428 iops : min= 940, max= 956, avg=948.40, stdev= 6.99, samples=5 00:36:32.428 lat (usec) : 500=0.04%, 750=1.58%, 1000=36.92% 00:36:32.428 lat (msec) : 2=61.16%, 4=0.26% 00:36:32.428 cpu : usr=1.79%, sys=3.61%, ctx=2659, majf=0, minf=1 00:36:32.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.428 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.428 issued rwts: total=2652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:32.428 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4028816: Tue Oct 1 08:51:24 2024 00:36:32.428 read: IOPS=944, BW=3777KiB/s (3868kB/s)(11.5MiB/3105msec) 00:36:32.428 slat (usec): min=6, max=23017, avg=62.95, stdev=806.42 00:36:32.428 clat (usec): min=383, max=5362, avg=980.33, stdev=189.33 00:36:32.428 lat (usec): min=411, max=24146, avg=1043.30, stdev=832.41 00:36:32.428 clat percentiles (usec): 00:36:32.428 | 1.00th=[ 553], 5.00th=[ 668], 10.00th=[ 758], 20.00th=[ 865], 00:36:32.428 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 996], 60.00th=[ 1020], 00:36:32.428 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1205], 00:36:32.428 | 99.00th=[ 1352], 99.50th=[ 1614], 99.90th=[ 2606], 99.95th=[ 3097], 00:36:32.428 | 99.99th=[ 5342] 00:36:32.428 bw ( KiB/s): min= 3537, max= 3968, per=38.91%, avg=3798.83, stdev=145.09, samples=6 00:36:32.428 iops : min= 884, max= 992, avg=949.67, stdev=36.36, samples=6 00:36:32.428 lat (usec) : 500=0.41%, 750=9.07%, 1000=43.71% 00:36:32.428 lat (msec) : 2=46.57%, 4=0.17%, 10=0.03% 00:36:32.428 cpu : usr=2.03%, sys=3.54%, ctx=2940, majf=0, minf=2 00:36:32.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.428 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.428 issued rwts: total=2933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:32.428 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4028817: Tue Oct 1 08:51:24 2024 00:36:32.428 read: IOPS=373, BW=1491KiB/s (1527kB/s)(4076KiB/2734msec) 00:36:32.428 slat (usec): min=25, max=22696, avg=62.56, stdev=828.16 
00:36:32.428 clat (usec): min=850, max=42142, avg=2592.00, stdev=7406.14 00:36:32.428 lat (usec): min=877, max=42169, avg=2654.59, stdev=7445.71 00:36:32.428 clat percentiles (usec): 00:36:32.428 | 1.00th=[ 971], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1123], 00:36:32.428 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1205], 00:36:32.428 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1352], 00:36:32.428 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:36:32.428 | 99.99th=[42206] 00:36:32.428 bw ( KiB/s): min= 1368, max= 1848, per=16.01%, avg=1563.20, stdev=179.83, samples=5 00:36:32.428 iops : min= 342, max= 462, avg=390.80, stdev=44.96, samples=5 00:36:32.428 lat (usec) : 1000=1.86% 00:36:32.428 lat (msec) : 2=94.51%, 50=3.53% 00:36:32.428 cpu : usr=0.55%, sys=1.57%, ctx=1023, majf=0, minf=2 00:36:32.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.428 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.428 issued rwts: total=1020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:32.428 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4028818: Tue Oct 1 08:51:24 2024 00:36:32.428 read: IOPS=383, BW=1532KiB/s (1569kB/s)(3900KiB/2546msec) 00:36:32.428 slat (nsec): min=5274, max=50631, avg=8832.87, stdev=4319.00 00:36:32.428 clat (usec): min=333, max=41560, avg=2574.77, stdev=8256.18 00:36:32.428 lat (usec): min=340, max=41586, avg=2583.58, stdev=8260.13 00:36:32.428 clat percentiles (usec): 00:36:32.428 | 1.00th=[ 515], 5.00th=[ 627], 10.00th=[ 668], 20.00th=[ 725], 00:36:32.428 | 30.00th=[ 766], 40.00th=[ 799], 50.00th=[ 824], 60.00th=[ 840], 00:36:32.428 | 70.00th=[ 865], 80.00th=[ 889], 90.00th=[ 938], 95.00th=[ 1029], 00:36:32.428 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:32.428 | 99.99th=[41681] 00:36:32.428 bw ( KiB/s): min= 96, max= 4048, per=15.96%, avg=1558.40, stdev=2010.05, samples=5 00:36:32.428 iops : min= 24, max= 1012, avg=389.60, stdev=502.51, samples=5 00:36:32.428 lat (usec) : 500=0.82%, 750=23.77%, 1000=69.57% 00:36:32.428 lat (msec) : 2=1.33%, 50=4.41% 00:36:32.428 cpu : usr=0.12%, sys=0.51%, ctx=976, majf=0, minf=2 00:36:32.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.428 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.428 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:32.428 00:36:32.428 Run status group 0 (all jobs): 00:36:32.428 READ: bw=9761KiB/s (9995kB/s), 1491KiB/s-3777KiB/s (1527kB/s-3868kB/s), io=29.6MiB (31.0MB), run=2546-3105msec 00:36:32.428 00:36:32.428 Disk stats (read/write): 00:36:32.428 nvme0n1: ios=2585/0, merge=0/0, ticks=3039/0, in_queue=3039, util=96.89% 00:36:32.428 nvme0n2: ios=2909/0, merge=0/0, ticks=3389/0, in_queue=3389, util=96.64% 00:36:32.428 nvme0n3: ios=998/0, merge=0/0, ticks=2371/0, in_queue=2371, util=95.60% 00:36:32.428 nvme0n4: ios=976/0, merge=0/0, ticks=2514/0, in_queue=2514, util=96.09% 00:36:32.428 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:36:32.428 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:32.688 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:32.688 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:32.946 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:32.946 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:32.946 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:32.946 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:33.206 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:33.206 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4028575 00:36:33.206 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:33.206 08:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:33.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:33.206 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:33.206 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:36:33.206 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:36:33.206 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:33.466 nvmf hotplug test: fio failed as expected 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:33.466 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:33.466 rmmod nvme_tcp 00:36:33.466 rmmod nvme_fabrics 00:36:33.728 rmmod nvme_keyring 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 4024997 ']' 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 4024997 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 4024997 ']' 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 4024997 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4024997 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4024997' 00:36:33.728 killing process with pid 4024997 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 4024997 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 4024997 00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 
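The waitforserial_disconnect call traced above (common/autotest_common.sh@1219 onward) is a poll: after nvme disconnect it re-checks lsblk until no block device carries the SPDKISFASTANDAWESOME serial, then returns 0. A minimal sketch under that reading, built only from the commands shown in the trace (the retry bound is an assumption; the helper's real limit does not appear in this excerpt):
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
i=0
# wait until the test serial no longer shows up in the block device list
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
    i=$((i + 1))
    [ "$i" -gt 15 ] && exit 1   # assumed upper bound, not taken from this log
done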
00:36:33.728 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.729 08:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.274 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:36.274 00:36:36.274 real 0m27.233s 00:36:36.274 user 2m17.866s 00:36:36.274 sys 0m12.379s 00:36:36.274 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:36.275 ************************************ 00:36:36.275 END TEST nvmf_fio_target 00:36:36.275 ************************************ 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:36.275 ************************************ 00:36:36.275 START TEST nvmf_bdevio 00:36:36.275 ************************************ 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:36.275 * Looking for test storage... 
00:36:36.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:36.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.275 --rc genhtml_branch_coverage=1 00:36:36.275 --rc genhtml_function_coverage=1 00:36:36.275 --rc genhtml_legend=1 00:36:36.275 --rc geninfo_all_blocks=1 00:36:36.275 --rc geninfo_unexecuted_blocks=1 00:36:36.275 00:36:36.275 ' 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:36.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.275 --rc genhtml_branch_coverage=1 00:36:36.275 --rc genhtml_function_coverage=1 00:36:36.275 --rc genhtml_legend=1 00:36:36.275 --rc geninfo_all_blocks=1 00:36:36.275 --rc geninfo_unexecuted_blocks=1 00:36:36.275 00:36:36.275 ' 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:36.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.275 --rc genhtml_branch_coverage=1 00:36:36.275 --rc genhtml_function_coverage=1 00:36:36.275 --rc genhtml_legend=1 00:36:36.275 --rc geninfo_all_blocks=1 00:36:36.275 --rc geninfo_unexecuted_blocks=1 00:36:36.275 00:36:36.275 ' 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:36.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.275 --rc genhtml_branch_coverage=1 00:36:36.275 --rc genhtml_function_coverage=1 00:36:36.275 --rc genhtml_legend=1 00:36:36.275 --rc geninfo_all_blocks=1 00:36:36.275 --rc geninfo_unexecuted_blocks=1 00:36:36.275 00:36:36.275 ' 00:36:36.275 08:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.275 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:36.276 08:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:36.276 08:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:44.418 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:44.418 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:44.418 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:44.418 
08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:44.418 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:44.418 08:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:44.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:44.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms
00:36:44.419
00:36:44.419 --- 10.0.0.2 ping statistics ---
00:36:44.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:44.419 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:44.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:44.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:36:44.419
00:36:44.419 --- 10.0.0.1 ping statistics ---
00:36:44.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:44.419 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- #
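The sequence above is nvmf_tcp_init from test/nvmf/common.sh: with two usable E810 ports it moves cvl_0_0 into the namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), leaves cvl_0_1 in the root namespace as the initiator side (10.0.0.1), opens TCP port 4420 through iptables, and ping-checks both directions. A minimal standalone sketch of the same topology, using the interface names and addresses from this run (assumes cvl_0_0/cvl_0_1 exist and root privileges):

# sketch of the netns test topology built above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The SPDK_NVMF comment tag is what later lets the teardown strip exactly these rules with grep -v SPDK_NVMF.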
nvmfpid=4033826 00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 4033826 00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 4033826 ']' 00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:44.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:44.419 08:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:44.419 [2024-10-01 08:51:35.201056] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:44.419 [2024-10-01 08:51:35.202203] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:36:44.419 [2024-10-01 08:51:35.202257] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.419 [2024-10-01 08:51:35.277094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:44.419 [2024-10-01 08:51:35.370412] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:44.419 [2024-10-01 08:51:35.370471] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:44.419 [2024-10-01 08:51:35.370480] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:44.419 [2024-10-01 08:51:35.370487] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:44.419 [2024-10-01 08:51:35.370493] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:44.419 [2024-10-01 08:51:35.372918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:36:44.419 [2024-10-01 08:51:35.373079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:36:44.419 [2024-10-01 08:51:35.373247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:44.419 [2024-10-01 08:51:35.373247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:36:44.419 [2024-10-01 08:51:35.458116] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:44.419 [2024-10-01 08:51:35.459170] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
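nvmfappstart hands nvmf_tgt the reactor mask -m 0x78. The mask is a per-core bitmap: 0x78 = 0b1111000, so bits 3 through 6 are set, which is exactly why the four reactors above come up on cores 3, 4, 5 and 6. A quick way to decode any such mask (illustrative one-liner, not part of the test scripts):

# decode an SPDK/DPDK core mask into core numbers
mask=0x78; for ((c = 0; c < 64; c++)); do (( (mask >> c) & 1 )) && echo "core $c"; done
# prints: core 3, core 4, core 5, core 6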
00:36:44.419 [2024-10-01 08:51:35.459341] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:44.419 [2024-10-01 08:51:35.459696] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:44.419 [2024-10-01 08:51:35.459746] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:44.419 [2024-10-01 08:51:36.070156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:44.419 Malloc0 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:44.419 [2024-10-01 08:51:36.154466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:36:44.419 { 00:36:44.419 "params": { 00:36:44.419 "name": "Nvme$subsystem", 00:36:44.419 "trtype": "$TEST_TRANSPORT", 00:36:44.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.419 "adrfam": "ipv4", 00:36:44.419 "trsvcid": "$NVMF_PORT", 00:36:44.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.419 "hdgst": ${hdgst:-false}, 00:36:44.419 "ddgst": ${ddgst:-false} 00:36:44.419 }, 00:36:44.419 "method": "bdev_nvme_attach_controller" 00:36:44.419 } 00:36:44.419 EOF 00:36:44.419 )") 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:36:44.419 08:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:36:44.419 "params": { 00:36:44.419 "name": "Nvme1", 00:36:44.419 "trtype": "tcp", 00:36:44.419 "traddr": "10.0.0.2", 00:36:44.419 "adrfam": "ipv4", 00:36:44.419 "trsvcid": "4420", 00:36:44.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:44.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:44.419 "hdgst": false, 00:36:44.419 "ddgst": false 00:36:44.419 }, 00:36:44.419 "method": "bdev_nvme_attach_controller" 00:36:44.419 }' 00:36:44.419 [2024-10-01 08:51:36.195976] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
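gen_nvmf_target_json above builds one heredoc fragment per subsystem, joins the fragments with jq, and prints the result; the /dev/fd/62 argument indicates the config reaches bdevio without a temporary file, most likely via bash process substitution:

# equivalent invocation (assumption: bash opened the substituted pipe on fd 62 in this run)
test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)

The printf output in the log, a bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420, is the config bdevio consumes to attach the NVMe-oF controller under test.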
00:36:44.419 [2024-10-01 08:51:36.196043] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033934 ]
00:36:44.680 [2024-10-01 08:51:36.259072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:36:44.680 [2024-10-01 08:51:36.327042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:36:44.680 [2024-10-01 08:51:36.327096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:36:44.680 [2024-10-01 08:51:36.327099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:36:44.940 I/O targets:
00:36:44.940 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:36:44.940
00:36:44.940
00:36:44.940 CUnit - A unit testing framework for C - Version 2.1-3
00:36:44.940 http://cunit.sourceforge.net/
00:36:44.940
00:36:44.940
00:36:44.940 Suite: bdevio tests on: Nvme1n1
00:36:44.940 Test: blockdev write read block ...passed
00:36:44.940 Test: blockdev write zeroes read block ...passed
00:36:44.940 Test: blockdev write zeroes read no split ...passed
00:36:44.940 Test: blockdev write zeroes read split ...passed
00:36:44.940 Test: blockdev write zeroes read split partial ...passed
00:36:44.940 Test: blockdev reset ...[2024-10-01 08:51:36.668970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:44.940 [2024-10-01 08:51:36.669037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe68270 (9): Bad file descriptor
00:36:44.940 [2024-10-01 08:51:36.715811] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:36:44.940 passed
00:36:44.940 Test: blockdev write read 8 blocks ...passed
00:36:44.940 Test: blockdev write read size > 128k ...passed
00:36:44.940 Test: blockdev write read invalid size ...passed
00:36:45.201 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:36:45.201 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:36:45.201 Test: blockdev write read max offset ...passed
00:36:45.201 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:36:45.201 Test: blockdev writev readv 8 blocks ...passed
00:36:45.201 Test: blockdev writev readv 30 x 1block ...passed
00:36:45.201 Test: blockdev writev readv block ...passed
00:36:45.201 Test: blockdev writev readv size > 128k ...passed
00:36:45.201 Test: blockdev writev readv size > 128k in two iovs ...passed
00:36:45.201 Test: blockdev comparev and writev ...[2024-10-01 08:51:36.940666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:45.201 [2024-10-01 08:51:36.940693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:45.201 [2024-10-01 08:51:36.940704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:45.201 [2024-10-01 08:51:36.940710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:45.201 [2024-10-01 08:51:36.941216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:45.201 [2024-10-01 08:51:36.941225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:36:45.201 [2024-10-01 08:51:36.941235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:45.201 [2024-10-01 08:51:36.941241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:36:45.201 [2024-10-01 08:51:36.941777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:45.201 [2024-10-01 08:51:36.941790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:36:45.201 [2024-10-01 08:51:36.941800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:45.201 [2024-10-01 08:51:36.941805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:36:45.201 [2024-10-01 08:51:36.942355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:45.201 [2024-10-01 08:51:36.942363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:36:45.201 [2024-10-01 08:51:36.942373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:45.201 [2024-10-01 08:51:36.942378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:36:45.201 passed
00:36:45.463 Test: blockdev nvme passthru rw ...passed
00:36:45.463 Test: blockdev nvme passthru vendor specific ...[2024-10-01 08:51:37.026841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:45.463 [2024-10-01 08:51:37.026852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:45.463 [2024-10-01 08:51:37.027199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:45.463 [2024-10-01 08:51:37.027209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:45.463 [2024-10-01 08:51:37.027539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:45.463 [2024-10-01 08:51:37.027547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:36:45.463 [2024-10-01 08:51:37.027873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:45.463 [2024-10-01 08:51:37.027882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:45.463 passed
00:36:45.463 Test: blockdev nvme admin passthru ...passed
00:36:45.463 Test: blockdev copy ...passed
00:36:45.463
00:36:45.463 Run Summary: Type Total Ran Passed Failed Inactive
00:36:45.463 suites 1 1 n/a 0 0
00:36:45.463 tests 23 23 23 0 0
00:36:45.463 asserts 152 152 152 0 n/a
00:36:45.463
00:36:45.463 Elapsed time = 1.162 seconds
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
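The nvmf_delete_subsystem call above closes the subsystem lifecycle opened at bdevio.sh lines 18-22. Pulled together, the RPCs this test issued against the running target were (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... bdevio run (23/23 tests passed above) ...
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Note that the COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09) completions above are the expected outcome of the fused compare-and-write miscompare case in 'blockdev comparev and writev', not a test failure; the suite still reports 23/23 passed.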
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 4033826 ']'
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 4033826
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 4033826 ']'
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 4033826
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:45.463 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4033826
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']'
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4033826'
killing process with pid 4033826
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 4033826
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 4033826
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:45.724 08:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:48.267 08:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:48.267
00:36:48.267 real 0m11.922s
00:36:48.267 user 0m9.115s
00:36:48.267 sys 0m6.289s
00:36:48.267 08:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:36:48.267 08:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:36:48.267 ************************************
00:36:48.267 END TEST nvmf_bdevio
00:36:48.267 ************************************
00:36:48.267 08:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:36:48.267
00:36:48.267 real 4m56.660s
00:36:48.267 user 10m14.847s
00:36:48.267 sys 2m2.455s
00:36:48.267 08:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable
00:36:48.267 08:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:48.267 ************************************
00:36:48.267 END TEST nvmf_target_core_interrupt_mode
00:36:48.267 ************************************
00:36:48.267 08:51:39 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:36:48.267 08:51:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:36:48.267 08:51:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:36:48.267 08:51:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:48.267 ************************************
00:36:48.267 START TEST nvmf_interrupt
00:36:48.267 ************************************
00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:36:48.267 * Looking for test storage...
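The nvmftestfini sequence just above is the mirror image of the setup: unload the kernel initiator modules, kill the target, and undo the network plumbing. A condensed sketch of that teardown (killprocess first probes the pid with kill -0 and checks via ps that it is not about to kill sudo; the netns delete is an assumed expansion of _remove_spdk_ns, whose output is redirected away in the log):

modprobe -v -r nvme-tcp          # cascades to the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
modprobe -v -r nvme-fabrics
kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 4033826
iptables-save | grep -v SPDK_NVMF | iptables-restore       # drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                            # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1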
00:36:48.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:48.267 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:48.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.268 --rc genhtml_branch_coverage=1 00:36:48.268 --rc genhtml_function_coverage=1 00:36:48.268 --rc genhtml_legend=1 00:36:48.268 --rc geninfo_all_blocks=1 00:36:48.268 --rc geninfo_unexecuted_blocks=1 00:36:48.268 00:36:48.268 ' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:48.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.268 --rc genhtml_branch_coverage=1 00:36:48.268 --rc genhtml_function_coverage=1 00:36:48.268 --rc genhtml_legend=1 00:36:48.268 --rc geninfo_all_blocks=1 00:36:48.268 --rc geninfo_unexecuted_blocks=1 00:36:48.268 00:36:48.268 ' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:48.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.268 --rc genhtml_branch_coverage=1 00:36:48.268 --rc genhtml_function_coverage=1 00:36:48.268 --rc genhtml_legend=1 00:36:48.268 --rc geninfo_all_blocks=1 00:36:48.268 --rc geninfo_unexecuted_blocks=1 00:36:48.268 00:36:48.268 ' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:48.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.268 --rc genhtml_branch_coverage=1 00:36:48.268 --rc genhtml_function_coverage=1 00:36:48.268 --rc genhtml_legend=1 00:36:48.268 --rc geninfo_all_blocks=1 00:36:48.268 --rc geninfo_unexecuted_blocks=1 00:36:48.268 00:36:48.268 ' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:48.268 08:51:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:56.406 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:56.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:56.407 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:56.407 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:56.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:56.407 08:51:46 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:56.407 08:51:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:56.407 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:56.407 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:56.407 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:56.407 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:56.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:56.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms
00:36:56.408
00:36:56.408 --- 10.0.0.2 ping statistics ---
00:36:56.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:56.408 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:56.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:56.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms
00:36:56.408
00:36:56.408 --- 10.0.0.1 ping statistics ---
00:36:56.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:56.408 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=4038347
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 4038347
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 4038347 ']'
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:56.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:56.408 08:51:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:56.408 [2024-10-01 08:51:47.336868] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:56.408 [2024-10-01 08:51:47.338035] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization...
00:36:56.408 [2024-10-01 08:51:47.338091] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:56.408 [2024-10-01 08:51:47.412595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:36:56.408 [2024-10-01 08:51:47.486172] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
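The interrupt-mode run repeats the namespace setup, then starts the target with -m 0x3 (bits 0 and 1 set, hence the two reactors on cores 0 and 1 just below) and, as the following lines show, backs the NVMe-oF namespace with an AIO bdev over a plain file instead of a Malloc bdev. A condensed sketch with this workspace's paths (backgrounding and the waitforlisten polling are elided assumptions):

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!                                                         # 4038347 in this run
dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000     # 10 MB backing file
scripts/rpc.py bdev_aio_create test/nvmf/target/aiofile AIO0 2048
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256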
00:36:56.408 [2024-10-01 08:51:47.486216] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:56.408 [2024-10-01 08:51:47.486224] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:56.408 [2024-10-01 08:51:47.486231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:56.408 [2024-10-01 08:51:47.486237] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:56.408 [2024-10-01 08:51:47.487067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:56.408 [2024-10-01 08:51:47.487083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.408 [2024-10-01 08:51:47.541896] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:56.408 [2024-10-01 08:51:47.542468] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:56.408 [2024-10-01 08:51:47.542795] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:56.408 5000+0 records in 00:36:56.408 5000+0 records out 00:36:56.408 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0182387 s, 561 MB/s 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.408 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:56.669 AIO0 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:56.669 [2024-10-01 08:51:48.255975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.669 08:51:48 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:56.669 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:56.670 [2024-10-01 08:51:48.304128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4038347 0 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4038347 0 idle 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4038347 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4038347 -w 256 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4038347 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.27 reactor_0' 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4038347 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.27 reactor_0 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:56.670 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4038347 1 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4038347 1 idle 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4038347 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:56.929 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4038347 -w 256 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4038388 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4038388 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4038578 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4038347 0 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4038347 0 busy 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4038347 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4038347 -w 256 00:36:56.930 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4038347 root 20 0 128.2g 44928 32256 R 46.7 0.0 0:00.34 reactor_0' 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4038347 root 20 0 128.2g 44928 32256 R 46.7 0.0 0:00.34 reactor_0 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=46.7 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=46 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4038347 1 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4038347 1 busy 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4038347 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4038347 -w 256 00:36:57.190 08:51:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4038388 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.22 reactor_1' 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4038388 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.22 reactor_1 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:57.451 08:51:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4038578 00:37:07.441 Initializing NVMe Controllers 00:37:07.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:07.441 Controller IO queue size 256, less than required. 00:37:07.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:07.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:07.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:07.441 Initialization complete. Launching workers. 
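Note: the two busy checks just above are the positive half of the test. With spdk_nvme_perf driving both queue pairs, reactor_0 (46.7%) and reactor_1 (93.3%) must climb past the temporarily lowered 30% busy threshold, showing that interrupt-mode reactors actually wake up under load. The probe itself is a one-shot top parse; a condensed sketch of what interrupt/common.sh does:

    # one batch iteration of top, threads visible (-H), wide lines (-w 256);
    # field 9 of the matching thread line is %CPU
    cpu=$(top -bHn 1 -p "$nvmf_pid" -w 256 | awk '/reactor_1/ {print $9}')
    cpu=${cpu%.*}                    # truncate the decimal, as the script does
    (( cpu >= 30 )) && echo "reactor_1 is busy (${cpu}%)"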
00:37:07.441 ======================================================== 00:37:07.441 Latency(us) 00:37:07.441 Device Information : IOPS MiB/s Average min max 00:37:07.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19641.90 76.73 13038.59 3240.65 33993.43 00:37:07.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16723.40 65.33 15313.07 7296.39 17654.75 00:37:07.441 ======================================================== 00:37:07.441 Total : 36365.30 142.05 14084.56 3240.65 33993.43 00:37:07.441 00:37:07.441 [2024-10-01 08:51:58.868879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d79b0 is same with the state(6) to be set 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4038347 0 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4038347 0 idle 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4038347 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4038347 -w 256 00:37:07.441 08:51:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4038347 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.27 reactor_0' 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4038347 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.27 reactor_0 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4038347 1 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4038347 1 idle 
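Note: the latency table above is the payload of the perf phase: roughly 36K IOPS aggregate at 4 KiB, 30% reads / 70% writes, split across two I/O queue pairs on lcores 2 and 3. The "queue size 256, less than required" warning fires because a queue depth of 256 cannot be fully in flight on a 256-entry NVMe queue (one slot must stay unused to distinguish full from empty), so some requests queue inside the driver, which the tool calls out but tolerates. The exact initiator command, as issued earlier in the trace:

    # run from the root namespace against the listener inside the namespace
    ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # -q 256 outstanding I/Os per queue, 4 KiB blocks, 30% reads (-M),
    # 10 s runtime, core mask 0xC = cores 2-3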
00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4038347 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4038347 -w 256 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4038388 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4038388 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:07.442 08:51:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:08.011 08:51:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:37:08.011 08:51:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:37:08.011 08:51:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:37:08.011 08:51:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:37:08.011 08:51:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter 
)) 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4038347 0 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4038347 0 idle 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4038347 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4038347 -w 256 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4038347 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.52 reactor_0' 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4038347 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.52 reactor_0 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4038347 1 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4038347 1 idle 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4038347 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4038347 -w 256 00:37:10.554 08:52:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4038388 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4038388 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:10.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:10.555 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:10.555 rmmod nvme_tcp 00:37:10.555 rmmod nvme_fabrics 00:37:10.815 rmmod nvme_keyring 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:10.815 08:52:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 4038347 ']' 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 4038347 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 4038347 ']' 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 4038347 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4038347 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4038347' 00:37:10.815 killing process with pid 4038347 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 4038347 00:37:10.815 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 4038347 00:37:10.816 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:10.816 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:10.816 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:10.816 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:11.074 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:37:11.074 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:11.074 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-restore 00:37:11.074 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:11.074 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:11.074 08:52:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.074 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:11.074 08:52:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.987 08:52:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:12.987 00:37:12.987 real 0m24.983s 00:37:12.987 user 0m40.260s 00:37:12.987 sys 0m9.413s 00:37:12.987 08:52:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:12.987 08:52:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:12.987 ************************************ 00:37:12.987 END TEST nvmf_interrupt 00:37:12.987 ************************************ 00:37:12.987 00:37:12.987 real 29m33.638s 00:37:12.987 user 61m17.686s 00:37:12.987 sys 9m56.396s 00:37:12.987 08:52:04 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:12.987 08:52:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:12.987 ************************************ 00:37:12.987 END TEST nvmf_tcp 00:37:12.987 ************************************ 00:37:12.987 08:52:04 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:37:12.987 08:52:04 -- spdk/autotest.sh@282 -- # run_test 
spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:12.987 08:52:04 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:12.987 08:52:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:12.987 08:52:04 -- common/autotest_common.sh@10 -- # set +x 00:37:13.247 ************************************ 00:37:13.247 START TEST spdkcli_nvmf_tcp 00:37:13.247 ************************************ 00:37:13.247 08:52:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:13.247 * Looking for test storage... 00:37:13.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:13.247 08:52:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:13.247 08:52:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:37:13.247 08:52:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:13.247 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:13.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.248 --rc genhtml_branch_coverage=1 00:37:13.248 --rc genhtml_function_coverage=1 00:37:13.248 --rc genhtml_legend=1 00:37:13.248 --rc geninfo_all_blocks=1 00:37:13.248 --rc geninfo_unexecuted_blocks=1 00:37:13.248 00:37:13.248 ' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:13.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.248 --rc genhtml_branch_coverage=1 00:37:13.248 --rc genhtml_function_coverage=1 00:37:13.248 --rc genhtml_legend=1 00:37:13.248 --rc geninfo_all_blocks=1 00:37:13.248 --rc geninfo_unexecuted_blocks=1 00:37:13.248 00:37:13.248 ' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:13.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.248 --rc genhtml_branch_coverage=1 00:37:13.248 --rc genhtml_function_coverage=1 00:37:13.248 --rc genhtml_legend=1 00:37:13.248 --rc geninfo_all_blocks=1 00:37:13.248 --rc geninfo_unexecuted_blocks=1 00:37:13.248 00:37:13.248 ' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:13.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:13.248 --rc genhtml_branch_coverage=1 00:37:13.248 --rc genhtml_function_coverage=1 00:37:13.248 --rc genhtml_legend=1 00:37:13.248 --rc geninfo_all_blocks=1 00:37:13.248 --rc geninfo_unexecuted_blocks=1 00:37:13.248 00:37:13.248 ' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:13.248 
08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:13.248 08:52:05 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:13.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:13.248 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4041786 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4041786 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 4041786 ']' 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:13.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:13.509 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:13.509 [2024-10-01 08:52:05.140701] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
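Note: from here the run follows the standard spdkcli test shape: start a plain polling-mode target (-m 0x3 -p 0), push a batch of create commands through spdkcli_job.py (the full list appears below), dump the object tree with 'll /nvmf' and diff it against a golden match file, then push the matching delete batch and kill the target. The same interaction can be driven by hand from an SPDK checkout; a sketch, with the command strings taken from the batch below:

    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    scripts/spdkcli.py '/bdevs/malloc create 32 512 Malloc1'
    scripts/spdkcli.py 'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
    scripts/spdkcli.py '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
    scripts/spdkcli.py 'll /nvmf'        # the tree that check_match compares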
00:37:13.509 [2024-10-01 08:52:05.140774] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041786 ] 00:37:13.509 [2024-10-01 08:52:05.208051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:13.509 [2024-10-01 08:52:05.283736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.509 [2024-10-01 08:52:05.283737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:14.450 08:52:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:14.450 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:14.450 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:14.450 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:14.450 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:14.450 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:14.450 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:14.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:14.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:14.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:14.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:14.450 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:14.450 ' 00:37:16.993 [2024-10-01 08:52:08.400707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:17.936 [2024-10-01 08:52:09.608577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:20.478 [2024-10-01 08:52:11.827129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:22.387 [2024-10-01 08:52:13.732746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:23.771 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:23.771 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:23.771 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:23.771 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:23.771 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:23.771 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:23.771 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:23.771 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:23.771 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:23.771 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:23.771 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:23.771 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:23.771 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:23.771 08:52:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:23.771 08:52:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:23.771 08:52:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:23.771 08:52:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:23.771 08:52:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:23.771 08:52:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:23.771 08:52:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:23.771 08:52:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:24.030 08:52:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:24.030 08:52:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:24.031 08:52:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:24.031 08:52:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:24.031 08:52:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:24.031 
08:52:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:24.031 08:52:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.031 08:52:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:24.031 08:52:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:24.031 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:24.031 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:24.031 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:24.031 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:24.031 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:24.031 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:24.031 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:24.031 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:24.031 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:24.031 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:24.031 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:24.031 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:24.031 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:24.031 ' 00:37:29.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:29.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:29.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:29.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:29.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:29.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:29.315 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:29.315 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:29.316 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:29.316 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:29.316 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:29.316 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:29.316 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:29.316 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.316 
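The clear phase runs in reverse order of creation: per-subsystem resources (namespaces, hosts, listeners) come off first, then the subsystems, and the malloc bdevs go last, so no bdev is removed while a subsystem still exports it. A sketch of the same cleanup done manually, under the same assumptions as the setup sketch above:

    # detach per-subsystem resources first
    scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'
    scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'
    scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'
    # then the subsystems, then the backing bdevs
    scripts/spdkcli.py '/nvmf/subsystem delete_all'
    scripts/spdkcli.py '/bdevs/malloc delete Malloc1'

Note that in the executed delete commands above the third field is False rather than True, the inverse of the create phase; the exact check semantics live in test/spdkcli/spdkcli_job.py.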
08:52:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4041786 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 4041786 ']' 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 4041786 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4041786 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4041786' 00:37:29.316 killing process with pid 4041786 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 4041786 00:37:29.316 08:52:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 4041786 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4041786 ']' 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4041786 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 4041786 ']' 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 4041786 00:37:29.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4041786) - No such process 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 4041786 is not found' 00:37:29.316 Process with pid 4041786 is not found 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:29.316 00:37:29.316 real 0m16.271s 00:37:29.316 user 0m33.627s 00:37:29.316 sys 0m0.723s 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:29.316 08:52:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:29.316 ************************************ 00:37:29.316 END TEST spdkcli_nvmf_tcp 00:37:29.316 ************************************ 00:37:29.576 08:52:21 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:29.576 08:52:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:29.576 08:52:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:29.576 08:52:21 -- common/autotest_common.sh@10 -- # set +x 00:37:29.576 ************************************ 00:37:29.576 START TEST nvmf_identify_passthru 00:37:29.576 ************************************ 00:37:29.576 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:29.576 * Looking for test 
storage... 00:37:29.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.576 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:29.576 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:37:29.576 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:29.576 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.576 08:52:21 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:29.576 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.576 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.576 --rc genhtml_branch_coverage=1 00:37:29.576 --rc genhtml_function_coverage=1 00:37:29.576 --rc genhtml_legend=1 00:37:29.576 --rc geninfo_all_blocks=1 00:37:29.576 --rc geninfo_unexecuted_blocks=1 00:37:29.576 00:37:29.576 ' 00:37:29.576 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.576 --rc genhtml_branch_coverage=1 00:37:29.576 --rc genhtml_function_coverage=1 00:37:29.577 --rc genhtml_legend=1 00:37:29.577 --rc geninfo_all_blocks=1 00:37:29.577 --rc geninfo_unexecuted_blocks=1 00:37:29.577 00:37:29.577 ' 00:37:29.577 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:29.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.577 --rc genhtml_branch_coverage=1 00:37:29.577 --rc genhtml_function_coverage=1 00:37:29.577 --rc genhtml_legend=1 00:37:29.577 --rc geninfo_all_blocks=1 00:37:29.577 --rc geninfo_unexecuted_blocks=1 00:37:29.577 00:37:29.577 ' 00:37:29.577 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:29.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.577 --rc genhtml_branch_coverage=1 00:37:29.577 --rc genhtml_function_coverage=1 00:37:29.577 --rc genhtml_legend=1 00:37:29.577 --rc geninfo_all_blocks=1 00:37:29.577 --rc geninfo_unexecuted_blocks=1 00:37:29.577 00:37:29.577 ' 00:37:29.577 08:52:21 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.577 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.837 08:52:21 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.837 08:52:21 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.837 08:52:21 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.837 08:52:21 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.837 08:52:21 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.837 08:52:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.837 08:52:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.837 08:52:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:29.837 08:52:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.837 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:29.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.838 08:52:21 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.838 08:52:21 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.838 08:52:21 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.838 08:52:21 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.838 08:52:21 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.838 08:52:21 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.838 08:52:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.838 08:52:21 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.838 08:52:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:29.838 08:52:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.838 08:52:21 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.838 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:29.838 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:29.838 08:52:21 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.838 08:52:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:36.491 08:52:28 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:36.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:36.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:36.491 
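The PCI walk above is how nvmf/common.sh picks NICs for the run: it buckets functions by vendor:device pair (0x8086 with 0x159b or 0x1592 is an Intel E810 port, 0x8086:0x37d2 is X722, the 0x15b3 entries are Mellanox) and, since this run requests e810 over tcp, keeps only the E810 list. The same inventory can be taken directly; a sketch, assuming pciutils is installed:

    # list Intel E810 functions by the vendor:device pair common.sh matches on
    lspci -d 8086:159b
    # or walk sysfs the way the script effectively does, one entry per PCI function
    for dev in /sys/bus/pci/devices/*; do
        printf '%s vendor=%s device=%s\n' "$dev" "$(cat "$dev/vendor")" "$(cat "$dev/device")"
    done | grep 0x159b

For each match (0000:4b:00.0 and 0000:4b:00.1 here) the script then globs /sys/bus/pci/devices/$pci/net/* to find the bound netdev, which is where the cvl_0_0 and cvl_0_1 names that follow come from.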
08:52:28 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:36.491 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:36.492 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:36.492 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:36.492 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:36.761 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:37.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:37.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:37:37.023 00:37:37.023 --- 10.0.0.2 ping statistics --- 00:37:37.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.023 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:37.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:37.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:37:37.023 00:37:37.023 --- 10.0.0.1 ping statistics --- 00:37:37.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.023 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:37.023 08:52:28 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:37.023 08:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:37.023 08:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:37:37.023 08:52:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:37:37.023 08:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:37.023 08:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:37.023 08:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:37.023 08:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:37.023 08:52:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:37.595 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:37:37.595 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:37.595 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:37.595 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:38.167 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:38.167 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:38.167 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:38.167 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:38.167 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:38.167 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:38.167 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:38.167 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4048842 00:37:38.167 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:38.167 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:38.167 08:52:29 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4048842 00:37:38.167 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 4048842 ']' 00:37:38.167 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.167 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:38.167 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.168 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:38.168 08:52:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:38.168 [2024-10-01 08:52:29.808814] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:37:38.168 [2024-10-01 08:52:29.808870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:38.168 [2024-10-01 08:52:29.876208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:38.168 [2024-10-01 08:52:29.941095] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:38.168 [2024-10-01 08:52:29.941135] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:38.168 [2024-10-01 08:52:29.941143] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:38.168 [2024-10-01 08:52:29.941150] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:38.168 [2024-10-01 08:52:29.941156] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:38.168 [2024-10-01 08:52:29.942906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.168 [2024-10-01 08:52:29.943019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:38.168 [2024-10-01 08:52:29.943121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.168 [2024-10-01 08:52:29.943121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:37:39.107 08:52:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.107 INFO: Log level set to 20 00:37:39.107 INFO: Requests: 00:37:39.107 { 00:37:39.107 "jsonrpc": "2.0", 00:37:39.107 "method": "nvmf_set_config", 00:37:39.107 "id": 1, 00:37:39.107 "params": { 00:37:39.107 "admin_cmd_passthru": { 00:37:39.107 "identify_ctrlr": true 00:37:39.107 } 00:37:39.107 } 00:37:39.107 } 00:37:39.107 00:37:39.107 INFO: response: 00:37:39.107 { 00:37:39.107 "jsonrpc": "2.0", 00:37:39.107 "id": 1, 00:37:39.107 "result": true 00:37:39.107 } 00:37:39.107 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.107 08:52:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.107 INFO: Setting log level to 20 00:37:39.107 INFO: Setting log level to 20 00:37:39.107 INFO: Log level set to 20 00:37:39.107 INFO: Log level set to 20 00:37:39.107 INFO: Requests: 00:37:39.107 { 00:37:39.107 "jsonrpc": "2.0", 00:37:39.107 "method": "framework_start_init", 00:37:39.107 "id": 1 00:37:39.107 } 00:37:39.107 00:37:39.107 INFO: Requests: 00:37:39.107 { 00:37:39.107 "jsonrpc": "2.0", 00:37:39.107 "method": "framework_start_init", 00:37:39.107 "id": 1 00:37:39.107 } 00:37:39.107 00:37:39.107 [2024-10-01 08:52:30.694031] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:39.107 INFO: response: 00:37:39.107 { 00:37:39.107 "jsonrpc": "2.0", 00:37:39.107 "id": 1, 00:37:39.107 "result": true 00:37:39.107 } 00:37:39.107 00:37:39.107 INFO: response: 00:37:39.107 { 00:37:39.107 "jsonrpc": "2.0", 00:37:39.107 "id": 1, 00:37:39.107 "result": true 00:37:39.107 } 00:37:39.107 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.107 08:52:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.107 08:52:30 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:39.107 INFO: Setting log level to 40 00:37:39.107 INFO: Setting log level to 40 00:37:39.107 INFO: Setting log level to 40 00:37:39.107 [2024-10-01 08:52:30.707383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.107 08:52:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.107 08:52:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.107 08:52:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.367 Nvme0n1 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.367 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.367 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.367 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.367 [2024-10-01 08:52:31.092282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.367 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.367 [ 00:37:39.367 { 00:37:39.367 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:39.367 "subtype": "Discovery", 00:37:39.367 "listen_addresses": [], 00:37:39.367 "allow_any_host": true, 00:37:39.367 "hosts": [] 00:37:39.367 }, 00:37:39.367 { 00:37:39.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:39.367 "subtype": "NVMe", 00:37:39.367 "listen_addresses": [ 00:37:39.367 { 00:37:39.367 "trtype": "TCP", 00:37:39.367 "adrfam": "IPv4", 00:37:39.367 "traddr": "10.0.0.2", 00:37:39.367 "trsvcid": "4420" 00:37:39.367 } 00:37:39.367 ], 00:37:39.367 "allow_any_host": true, 00:37:39.367 "hosts": [], 00:37:39.367 "serial_number": 
"SPDK00000000000001", 00:37:39.367 "model_number": "SPDK bdev Controller", 00:37:39.367 "max_namespaces": 1, 00:37:39.367 "min_cntlid": 1, 00:37:39.367 "max_cntlid": 65519, 00:37:39.367 "namespaces": [ 00:37:39.367 { 00:37:39.367 "nsid": 1, 00:37:39.367 "bdev_name": "Nvme0n1", 00:37:39.367 "name": "Nvme0n1", 00:37:39.367 "nguid": "36344730526054870025384500000044", 00:37:39.367 "uuid": "36344730-5260-5487-0025-384500000044" 00:37:39.367 } 00:37:39.367 ] 00:37:39.367 } 00:37:39.367 ] 00:37:39.367 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.367 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:39.367 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:39.367 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:39.627 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:37:39.627 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:39.627 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:39.627 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:39.887 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:39.887 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:37:39.887 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:39.887 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:39.887 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.887 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.887 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.887 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:39.887 08:52:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:39.887 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:39.887 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:39.887 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:39.887 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:39.887 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:39.887 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:39.887 rmmod nvme_tcp 00:37:39.887 rmmod nvme_fabrics 00:37:39.887 rmmod nvme_keyring 00:37:40.147 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:40.147 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:40.147 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:40.147 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 
4048842 ']' 00:37:40.147 08:52:31 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 4048842 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 4048842 ']' 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 4048842 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4048842 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4048842' 00:37:40.147 killing process with pid 4048842 00:37:40.147 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 4048842 00:37:40.148 08:52:31 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 4048842 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:40.408 08:52:32 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.408 08:52:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:40.408 08:52:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.949 08:52:34 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:42.949 00:37:42.949 real 0m12.962s 00:37:42.949 user 0m10.623s 00:37:42.949 sys 0m6.442s 00:37:42.949 08:52:34 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:42.949 08:52:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:42.949 ************************************ 00:37:42.949 END TEST nvmf_identify_passthru 00:37:42.949 ************************************ 00:37:42.949 08:52:34 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:42.949 08:52:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:42.949 08:52:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:42.949 08:52:34 -- common/autotest_common.sh@10 -- # set +x 00:37:42.949 ************************************ 00:37:42.949 START TEST nvmf_dif 00:37:42.949 ************************************ 00:37:42.949 08:52:34 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:42.949 * Looking for test storage... 
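The pass criterion of the identify_passthru test that just finished is visible in its trace: the serial (S64GNE0R605487) and model (SAMSUNG) strings read from the local PCIe controller must come back unchanged when the same controller is identified through the NVMe-oF target. A sketch of that comparison using the same tool the test invokes, run from the spdk checkout (the -r argument strings are copied from the trace):

    # identify the controller directly over PCIe
    local_sn=$(build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 \
               | awk '/Serial Number:/ {print $3}')
    # identify it again through the TCP subsystem with passthru enabled
    remote_sn=$(build/bin/spdk_nvme_identify \
               -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
               | awk '/Serial Number:/ {print $3}')
    [ "$local_sn" = "$remote_sn" ] || echo "passthru mismatch: $local_sn vs $remote_sn"

Without the earlier `nvmf_set_config --passthru-identify-ctrlr` call, the remote identify would report the subsystem's own identity (serial SPDK00000000000001, model "SPDK bdev Controller") instead of the underlying drive's, which is exactly the regression this test guards against.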
00:37:42.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:42.949 08:52:34 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:42.949 08:52:34 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:37:42.949 08:52:34 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:42.949 08:52:34 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:42.949 08:52:34 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:42.949 08:52:34 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:42.949 08:52:34 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:42.949 08:52:34 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:42.949 08:52:34 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:42.950 08:52:34 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:42.950 08:52:34 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:42.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.950 --rc genhtml_branch_coverage=1 00:37:42.950 --rc genhtml_function_coverage=1 00:37:42.950 --rc genhtml_legend=1 00:37:42.950 --rc geninfo_all_blocks=1 00:37:42.950 --rc geninfo_unexecuted_blocks=1 00:37:42.950 00:37:42.950 ' 00:37:42.950 08:52:34 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:42.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.950 --rc genhtml_branch_coverage=1 00:37:42.950 --rc genhtml_function_coverage=1 00:37:42.950 --rc genhtml_legend=1 00:37:42.950 --rc geninfo_all_blocks=1 00:37:42.950 --rc geninfo_unexecuted_blocks=1 00:37:42.950 00:37:42.950 ' 00:37:42.950 08:52:34 nvmf_dif -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:37:42.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.950 --rc genhtml_branch_coverage=1 00:37:42.950 --rc genhtml_function_coverage=1 00:37:42.950 --rc genhtml_legend=1 00:37:42.950 --rc geninfo_all_blocks=1 00:37:42.950 --rc geninfo_unexecuted_blocks=1 00:37:42.950 00:37:42.950 ' 00:37:42.950 08:52:34 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:42.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.950 --rc genhtml_branch_coverage=1 00:37:42.950 --rc genhtml_function_coverage=1 00:37:42.950 --rc genhtml_legend=1 00:37:42.950 --rc geninfo_all_blocks=1 00:37:42.950 --rc geninfo_unexecuted_blocks=1 00:37:42.950 00:37:42.950 ' 00:37:42.950 08:52:34 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.950 08:52:34 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.950 08:52:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.950 08:52:34 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.950 08:52:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.950 08:52:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:42.950 08:52:34 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:42.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:42.950 08:52:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:42.950 08:52:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:42.950 08:52:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:42.950 08:52:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:42.950 08:52:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:42.950 08:52:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:42.950 08:52:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:42.950 08:52:34 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:37:42.950 08:52:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:49.533 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:49.533 08:52:41 nvmf_dif 
-- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:49.533 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:49.533 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:49.533 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:49.533 
08:52:41 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:49.533 08:52:41 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:49.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:49.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:37:49.795 00:37:49.795 --- 10.0.0.2 ping statistics --- 00:37:49.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:49.795 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:49.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:49.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:37:49.795 00:37:49.795 --- 10.0.0.1 ping statistics --- 00:37:49.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:49.795 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:37:49.795 08:52:41 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:53.117 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:53.117 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:53.117 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:53.689 08:52:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:53.689 08:52:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:53.689 08:52:45 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:53.689 08:52:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=4055009 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 4055009 00:37:53.689 08:52:45 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:53.689 08:52:45 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 4055009 ']' 00:37:53.689 08:52:45 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.689 08:52:45 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:53.689 08:52:45 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:37:53.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.689 08:52:45 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:53.689 08:52:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.689 [2024-10-01 08:52:45.375186] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:37:53.689 [2024-10-01 08:52:45.375237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:53.689 [2024-10-01 08:52:45.442327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.689 [2024-10-01 08:52:45.505027] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:53.689 [2024-10-01 08:52:45.505064] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:53.689 [2024-10-01 08:52:45.505073] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:53.689 [2024-10-01 08:52:45.505079] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:53.689 [2024-10-01 08:52:45.505085] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:53.689 [2024-10-01 08:52:45.505656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:37:54.629 08:52:46 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:54.629 08:52:46 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:54.629 08:52:46 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:54.629 08:52:46 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:54.629 [2024-10-01 08:52:46.221383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:54.629 08:52:46 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:54.629 08:52:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:54.629 ************************************ 00:37:54.629 START TEST fio_dif_1_default 00:37:54.629 ************************************ 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:54.629 bdev_null0 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:54.629 [2024-10-01 08:52:46.289713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:54.629 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:37:54.630 { 00:37:54.630 "params": { 00:37:54.630 "name": "Nvme$subsystem", 00:37:54.630 "trtype": "$TEST_TRANSPORT", 00:37:54.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:54.630 "adrfam": "ipv4", 00:37:54.630 "trsvcid": "$NVMF_PORT", 00:37:54.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:54.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:54.630 "hdgst": ${hdgst:-false}, 00:37:54.630 "ddgst": ${ddgst:-false} 00:37:54.630 }, 00:37:54.630 "method": "bdev_nvme_attach_controller" 00:37:54.630 } 00:37:54.630 EOF 00:37:54.630 )") 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
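For reference, the gen_nvmf_target_json step traced here assembles the SPDK JSON configuration that fio's spdk_bdev ioengine loads over a pipe (/dev/fd/62 in this run). A minimal standalone sketch of the same configuration follows — the connection parameters mirror the printf output captured just below, while the /tmp path and the outer "subsystems"/"bdev" wrapper are assumptions about the full file shape rather than something visible in this trace:
# sketch: write the attach-controller config to a file instead of a pipe
# (values mirror the trace below; path and wrapper shape are illustrative)
cat > /tmp/spdk_fio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# fio would then consume it the same way the harness does:
#   fio --ioengine=spdk_bdev --spdk_json_conf /tmp/spdk_fio.json <jobfile>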
00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:37:54.630 "params": { 00:37:54.630 "name": "Nvme0", 00:37:54.630 "trtype": "tcp", 00:37:54.630 "traddr": "10.0.0.2", 00:37:54.630 "adrfam": "ipv4", 00:37:54.630 "trsvcid": "4420", 00:37:54.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:54.630 "hdgst": false, 00:37:54.630 "ddgst": false 00:37:54.630 }, 00:37:54.630 "method": "bdev_nvme_attach_controller" 00:37:54.630 }' 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:54.630 08:52:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.198 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:55.198 fio-3.35 00:37:55.198 Starting 1 thread 00:38:07.423 00:38:07.423 filename0: (groupid=0, jobs=1): err= 0: pid=4055509: Tue Oct 1 08:52:57 2024 00:38:07.423 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10025msec) 00:38:07.423 slat (nsec): min=2789, max=18068, avg=5546.97, stdev=657.13 00:38:07.423 clat (usec): min=886, max=47908, avg=41070.72, stdev=2651.33 00:38:07.423 lat (usec): min=892, max=47918, avg=41076.27, stdev=2651.17 00:38:07.423 clat percentiles (usec): 00:38:07.423 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:07.423 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:07.423 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:38:07.423 | 99.00th=[42730], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:38:07.423 | 99.99th=[47973] 00:38:07.423 bw ( KiB/s): min= 352, max= 416, per=99.63%, avg=388.80, stdev=15.66, samples=20 00:38:07.423 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:38:07.423 lat (usec) : 1000=0.41% 00:38:07.423 lat (msec) : 50=99.59% 00:38:07.423 cpu : usr=93.78%, sys=6.02%, ctx=17, majf=0, minf=227 00:38:07.423 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:07.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.423 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:07.423 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:07.423 00:38:07.423 Run 
status group 0 (all jobs): 00:38:07.423 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10025-10025msec 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 00:38:07.423 real 0m11.091s 00:38:07.423 user 0m24.554s 00:38:07.423 sys 0m0.962s 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 ************************************ 00:38:07.423 END TEST fio_dif_1_default 00:38:07.423 ************************************ 00:38:07.423 08:52:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:07.423 08:52:57 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:07.423 08:52:57 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 ************************************ 00:38:07.423 START TEST fio_dif_1_multi_subsystems 00:38:07.423 ************************************ 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 bdev_null0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 [2024-10-01 08:52:57.472951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 bdev_null1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:07.423 { 00:38:07.423 "params": { 00:38:07.423 "name": "Nvme$subsystem", 00:38:07.423 "trtype": "$TEST_TRANSPORT", 00:38:07.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:07.423 "adrfam": "ipv4", 00:38:07.423 "trsvcid": "$NVMF_PORT", 00:38:07.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:07.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:07.423 "hdgst": ${hdgst:-false}, 00:38:07.423 "ddgst": ${ddgst:-false} 00:38:07.423 }, 00:38:07.423 "method": "bdev_nvme_attach_controller" 00:38:07.423 } 00:38:07.423 EOF 00:38:07.423 )") 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:07.423 08:52:57 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:07.423 { 00:38:07.423 "params": { 00:38:07.423 "name": "Nvme$subsystem", 00:38:07.423 "trtype": "$TEST_TRANSPORT", 00:38:07.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:07.423 "adrfam": "ipv4", 00:38:07.423 "trsvcid": "$NVMF_PORT", 00:38:07.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:07.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:07.423 "hdgst": ${hdgst:-false}, 00:38:07.423 "ddgst": ${ddgst:-false} 00:38:07.423 }, 00:38:07.423 "method": "bdev_nvme_attach_controller" 00:38:07.423 } 00:38:07.423 EOF 00:38:07.423 )") 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:07.423 "params": { 00:38:07.423 "name": "Nvme0", 00:38:07.423 "trtype": "tcp", 00:38:07.423 "traddr": "10.0.0.2", 00:38:07.423 "adrfam": "ipv4", 00:38:07.423 "trsvcid": "4420", 00:38:07.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:07.423 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:07.423 "hdgst": false, 00:38:07.423 "ddgst": false 00:38:07.423 }, 00:38:07.423 "method": "bdev_nvme_attach_controller" 00:38:07.423 },{ 00:38:07.423 "params": { 00:38:07.423 "name": "Nvme1", 00:38:07.423 "trtype": "tcp", 00:38:07.423 "traddr": "10.0.0.2", 00:38:07.423 "adrfam": "ipv4", 00:38:07.423 "trsvcid": "4420", 00:38:07.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:07.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:07.423 "hdgst": false, 00:38:07.423 "ddgst": false 00:38:07.423 }, 00:38:07.423 "method": "bdev_nvme_attach_controller" 00:38:07.423 }' 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:07.423 08:52:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:07.423 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:07.423 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:07.423 fio-3.35 00:38:07.423 Starting 2 threads 00:38:17.417 00:38:17.417 filename0: (groupid=0, jobs=1): err= 0: pid=4057745: Tue Oct 1 08:53:08 2024 00:38:17.417 read: IOPS=190, BW=762KiB/s (781kB/s)(7632KiB/10010msec) 00:38:17.417 slat (nsec): min=5407, max=45641, avg=6499.40, stdev=2176.61 00:38:17.417 clat (usec): min=508, max=42461, avg=20965.30, stdev=20144.00 00:38:17.417 lat (usec): min=516, max=42469, avg=20971.80, stdev=20143.77 00:38:17.417 clat percentiles (usec): 00:38:17.417 | 1.00th=[ 586], 5.00th=[ 685], 10.00th=[ 848], 20.00th=[ 889], 00:38:17.417 | 30.00th=[ 906], 40.00th=[ 922], 50.00th=[ 2212], 60.00th=[41157], 00:38:17.417 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:17.417 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:17.417 | 99.99th=[42206] 00:38:17.417 bw ( KiB/s): min= 704, max= 768, per=66.12%, avg=761.60, stdev=19.70, samples=20 00:38:17.417 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:38:17.417 lat (usec) : 750=7.86%, 1000=41.46% 00:38:17.417 lat (msec) : 2=0.58%, 4=0.21%, 50=49.90% 00:38:17.417 cpu : usr=95.34%, sys=4.42%, ctx=12, majf=0, minf=163 00:38:17.417 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:17.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.417 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:17.417 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:17.417 filename1: (groupid=0, jobs=1): err= 0: pid=4057746: Tue Oct 1 08:53:08 2024 00:38:17.417 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10023msec) 00:38:17.417 slat (nsec): min=5405, max=32778, avg=6782.46, stdev=1911.79 00:38:17.417 clat (usec): min=938, max=43063, avg=41054.97, stdev=2626.86 00:38:17.417 lat (usec): min=943, max=43095, avg=41061.75, stdev=2626.96 00:38:17.417 clat percentiles (usec): 00:38:17.417 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:17.417 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:17.417 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42730], 00:38:17.418 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:38:17.418 | 99.99th=[43254] 00:38:17.418 bw ( KiB/s): min= 384, max= 416, per=33.71%, avg=388.80, stdev=11.72, samples=20 00:38:17.418 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:17.418 lat (usec) : 1000=0.41% 00:38:17.418 lat (msec) : 50=99.59% 00:38:17.418 cpu : usr=95.67%, sys=4.08%, ctx=13, majf=0, minf=146 00:38:17.418 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:17.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.418 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:17.418 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:17.418 00:38:17.418 Run status group 0 (all jobs): 00:38:17.418 READ: bw=1151KiB/s (1179kB/s), 390KiB/s-762KiB/s (399kB/s-781kB/s), io=11.3MiB (11.8MB), run=10010-10023msec 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.418 00:38:17.418 real 0m11.418s 00:38:17.418 user 0m37.001s 00:38:17.418 sys 0m1.229s 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 ************************************ 00:38:17.418 END TEST fio_dif_1_multi_subsystems 00:38:17.418 ************************************ 00:38:17.418 08:53:08 nvmf_dif -- target/dif.sh@143 -- 
# run_test fio_dif_rand_params fio_dif_rand_params 00:38:17.418 08:53:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:17.418 08:53:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 ************************************ 00:38:17.418 START TEST fio_dif_rand_params 00:38:17.418 ************************************ 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 bdev_null0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:17.418 [2024-10-01 08:53:08.970932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
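Each test in this group provisions the target the same way: create a DIF-capable null bdev, expose it through an NVMe-oF subsystem, and listen on TCP. Reduced to SPDK's rpc.py, the rpc_cmd calls traced above amount to the following sketch (the commands, flags, and values mirror this log — fio_dif_rand_params uses --dif-type 3 — while the scripts/rpc.py invocation path is an assumption about where the tool lives in a checkout):
# sketch of the per-test provisioning sequence seen in the trace,
# run against the nvmf_tgt started earlier in this log
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# teardown, mirroring destroy_subsystems:
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0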
00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:17.418 { 00:38:17.418 "params": { 00:38:17.418 "name": "Nvme$subsystem", 00:38:17.418 "trtype": "$TEST_TRANSPORT", 00:38:17.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:17.418 "adrfam": "ipv4", 00:38:17.418 "trsvcid": "$NVMF_PORT", 00:38:17.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:17.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:17.418 "hdgst": ${hdgst:-false}, 00:38:17.418 "ddgst": ${ddgst:-false} 00:38:17.418 }, 00:38:17.418 "method": "bdev_nvme_attach_controller" 00:38:17.418 } 00:38:17.418 EOF 00:38:17.418 )") 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # 
jq . 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:38:17.418 08:53:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:17.419 "params": { 00:38:17.419 "name": "Nvme0", 00:38:17.419 "trtype": "tcp", 00:38:17.419 "traddr": "10.0.0.2", 00:38:17.419 "adrfam": "ipv4", 00:38:17.419 "trsvcid": "4420", 00:38:17.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:17.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:17.419 "hdgst": false, 00:38:17.419 "ddgst": false 00:38:17.419 }, 00:38:17.419 "method": "bdev_nvme_attach_controller" 00:38:17.419 }' 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:17.419 08:53:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:17.679 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:17.679 ... 
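
The xtrace above shows the mechanism behind this run: gen_nvmf_target_json expands one bdev_nvme_attach_controller entry per subsystem from a heredoc template, joins the entries with IFS=',', pretty-prints the result through jq, and hands it to the fio bdev plugin as an anonymous file descriptor (--spdk_json_conf /dev/fd/62) alongside the generated job file (/dev/fd/61). A minimal standalone sketch of the same pattern follows; the plugin path and job file name are illustrative, and the outer "subsystems" wrapper is an assumption about SPDK's JSON config layout, since only the inner config entries are visible in this trace.

#!/usr/bin/env bash
# Sketch: build an SPDK bdev JSON config on the fly and hand it to fio
# through /dev/fd, as the trace above does. Assumed (not shown in this log):
# plugin built at ./spdk/build/fio/spdk_bdev, a job.fio job file, and the
# outer {"subsystems": ...} wrapper layout.
plugin=./spdk/build/fio/spdk_bdev

gen_attach() {  # $1 = subsystem index; field values mirror the entry printed above
cat <<EOF
{
  "params": {
    "name": "Nvme$1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$1",
    "hostnqn": "nqn.2016-06.io.spdk:host$1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Process substitution plays the role of the /dev/fd/62 redirection above:
# the generated config never touches the filesystem.
LD_PRELOAD=$plugin fio --ioengine=spdk_bdev \
  --spdk_json_conf <(printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$(gen_attach 0)" | jq .) \
  job.fio

For the three-subsystem run later in this log, the same template is expanded three times and the entries are joined with commas before substitution; keeping the config on an anonymous descriptor leaves nothing behind in the workspace and keeps concurrent test invocations from picking up each other's controller lists.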
00:38:17.679 fio-3.35 00:38:17.679 Starting 3 threads 00:38:24.266 00:38:24.266 filename0: (groupid=0, jobs=1): err= 0: pid=4059990: Tue Oct 1 08:53:14 2024 00:38:24.266 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(157MiB/5043msec) 00:38:24.266 slat (nsec): min=5441, max=45016, avg=8031.44, stdev=1873.06 00:38:24.266 clat (usec): min=4937, max=88514, avg=11973.01, stdev=9572.28 00:38:24.266 lat (usec): min=4946, max=88520, avg=11981.04, stdev=9572.14 00:38:24.266 clat percentiles (usec): 00:38:24.266 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7832], 00:38:24.266 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[10552], 60.00th=[10814], 00:38:24.266 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12125], 95.00th=[46400], 00:38:24.266 | 99.00th=[51643], 99.50th=[52691], 99.90th=[88605], 99.95th=[88605], 00:38:24.266 | 99.99th=[88605] 00:38:24.267 bw ( KiB/s): min=20480, max=40192, per=36.60%, avg=32179.20, stdev=5839.01, samples=10 00:38:24.267 iops : min= 160, max= 314, avg=251.40, stdev=45.62, samples=10 00:38:24.267 lat (msec) : 10=39.48%, 20=55.36%, 50=2.78%, 100=2.38% 00:38:24.267 cpu : usr=95.93%, sys=3.79%, ctx=13, majf=0, minf=91 00:38:24.267 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.267 issued rwts: total=1259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:24.267 filename0: (groupid=0, jobs=1): err= 0: pid=4059991: Tue Oct 1 08:53:14 2024 00:38:24.267 read: IOPS=220, BW=27.6MiB/s (29.0MB/s)(138MiB/5007msec) 00:38:24.267 slat (nsec): min=5434, max=33079, avg=8214.43, stdev=1782.77 00:38:24.267 clat (usec): min=4249, max=53290, avg=13566.80, stdev=11390.76 00:38:24.267 lat (usec): min=4257, max=53296, avg=13575.02, stdev=11390.77 00:38:24.267 clat percentiles (usec): 00:38:24.267 | 1.00th=[ 5342], 5.00th=[ 6652], 10.00th=[ 7242], 20.00th=[ 8586], 00:38:24.267 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:38:24.267 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12649], 95.00th=[50594], 00:38:24.267 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[53216], 00:38:24.267 | 99.99th=[53216] 00:38:24.267 bw ( KiB/s): min=19200, max=38400, per=32.11%, avg=28236.80, stdev=5368.61, samples=10 00:38:24.267 iops : min= 150, max= 300, avg=220.60, stdev=41.94, samples=10 00:38:24.267 lat (msec) : 10=28.57%, 20=63.02%, 50=2.35%, 100=6.06% 00:38:24.267 cpu : usr=95.49%, sys=4.25%, ctx=11, majf=0, minf=105 00:38:24.267 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.267 issued rwts: total=1106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:24.267 filename0: (groupid=0, jobs=1): err= 0: pid=4059992: Tue Oct 1 08:53:14 2024 00:38:24.267 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(137MiB/5007msec) 00:38:24.267 slat (nsec): min=5442, max=31957, avg=7681.92, stdev=1696.32 00:38:24.267 clat (usec): min=5685, max=91870, avg=13654.10, stdev=7204.74 00:38:24.267 lat (usec): min=5690, max=91878, avg=13661.78, stdev=7204.89 00:38:24.267 clat percentiles (usec): 00:38:24.267 | 1.00th=[ 6259], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[ 
9896], 00:38:24.267 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12649], 60.00th=[14353], 00:38:24.267 | 70.00th=[15139], 80.00th=[15664], 90.00th=[16581], 95.00th=[17433], 00:38:24.267 | 99.00th=[50070], 99.50th=[52691], 99.90th=[88605], 99.95th=[91751], 00:38:24.267 | 99.99th=[91751] 00:38:24.267 bw ( KiB/s): min=19968, max=32768, per=31.91%, avg=28057.60, stdev=3882.81, samples=10 00:38:24.267 iops : min= 156, max= 256, avg=219.20, stdev=30.33, samples=10 00:38:24.267 lat (msec) : 10=20.84%, 20=76.62%, 50=1.18%, 100=1.36% 00:38:24.267 cpu : usr=95.05%, sys=4.69%, ctx=9, majf=0, minf=88 00:38:24.267 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:24.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.267 issued rwts: total=1099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:24.267 00:38:24.267 Run status group 0 (all jobs): 00:38:24.267 READ: bw=85.9MiB/s (90.0MB/s), 27.4MiB/s-31.2MiB/s (28.8MB/s-32.7MB/s), io=433MiB (454MB), run=5007-5043msec 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 bdev_null0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 [2024-10-01 08:53:15.093963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 bdev_null1 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.267 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.268 bdev_null2 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:24.268 { 00:38:24.268 "params": { 00:38:24.268 "name": "Nvme$subsystem", 00:38:24.268 "trtype": "$TEST_TRANSPORT", 00:38:24.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.268 "adrfam": "ipv4", 00:38:24.268 "trsvcid": "$NVMF_PORT", 00:38:24.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.268 "hdgst": ${hdgst:-false}, 00:38:24.268 "ddgst": ${ddgst:-false} 00:38:24.268 }, 00:38:24.268 "method": "bdev_nvme_attach_controller" 00:38:24.268 } 00:38:24.268 EOF 00:38:24.268 )") 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:24.268 { 00:38:24.268 "params": { 00:38:24.268 "name": "Nvme$subsystem", 00:38:24.268 "trtype": "$TEST_TRANSPORT", 00:38:24.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.268 "adrfam": "ipv4", 00:38:24.268 "trsvcid": "$NVMF_PORT", 00:38:24.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.268 "hdgst": ${hdgst:-false}, 00:38:24.268 "ddgst": ${ddgst:-false} 00:38:24.268 }, 00:38:24.268 "method": "bdev_nvme_attach_controller" 00:38:24.268 } 00:38:24.268 EOF 00:38:24.268 )") 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:24.268 { 00:38:24.268 "params": { 00:38:24.268 "name": "Nvme$subsystem", 00:38:24.268 "trtype": "$TEST_TRANSPORT", 00:38:24.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.268 "adrfam": "ipv4", 00:38:24.268 "trsvcid": "$NVMF_PORT", 00:38:24.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.268 "hdgst": ${hdgst:-false}, 00:38:24.268 "ddgst": ${ddgst:-false} 00:38:24.268 }, 00:38:24.268 "method": "bdev_nvme_attach_controller" 00:38:24.268 } 00:38:24.268 EOF 00:38:24.268 )") 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:24.268 "params": { 00:38:24.268 "name": "Nvme0", 00:38:24.268 "trtype": "tcp", 00:38:24.268 "traddr": "10.0.0.2", 00:38:24.268 "adrfam": "ipv4", 00:38:24.268 "trsvcid": "4420", 00:38:24.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.268 "hdgst": false, 00:38:24.268 "ddgst": false 00:38:24.268 }, 00:38:24.268 "method": "bdev_nvme_attach_controller" 00:38:24.268 },{ 00:38:24.268 "params": { 00:38:24.268 "name": "Nvme1", 00:38:24.268 "trtype": "tcp", 00:38:24.268 "traddr": "10.0.0.2", 00:38:24.268 "adrfam": "ipv4", 00:38:24.268 "trsvcid": "4420", 00:38:24.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:24.268 "hdgst": false, 00:38:24.268 "ddgst": false 00:38:24.268 }, 00:38:24.268 "method": "bdev_nvme_attach_controller" 00:38:24.268 },{ 00:38:24.268 "params": { 00:38:24.268 "name": "Nvme2", 00:38:24.268 "trtype": "tcp", 00:38:24.268 "traddr": "10.0.0.2", 00:38:24.268 "adrfam": "ipv4", 00:38:24.268 "trsvcid": "4420", 00:38:24.268 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:24.268 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:24.268 "hdgst": false, 00:38:24.268 "ddgst": false 00:38:24.268 }, 00:38:24.268 "method": "bdev_nvme_attach_controller" 00:38:24.268 }' 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:24.268 08:53:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:24.268 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:24.268 ... 00:38:24.268 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:24.268 ... 00:38:24.268 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:24.268 ... 00:38:24.268 fio-3.35 00:38:24.268 Starting 24 threads 00:38:36.550 00:38:36.550 filename0: (groupid=0, jobs=1): err= 0: pid=4061447: Tue Oct 1 08:53:26 2024 00:38:36.550 read: IOPS=578, BW=2314KiB/s (2370kB/s)(22.6MiB/10010msec) 00:38:36.550 slat (nsec): min=5562, max=84950, avg=7072.63, stdev=3613.21 00:38:36.550 clat (usec): min=14916, max=34660, avg=27588.84, stdev=5357.17 00:38:36.551 lat (usec): min=14933, max=34666, avg=27595.91, stdev=5357.39 00:38:36.551 clat percentiles (usec): 00:38:36.551 | 1.00th=[18220], 5.00th=[20055], 10.00th=[20579], 20.00th=[21890], 00:38:36.551 | 30.00th=[22676], 40.00th=[24511], 50.00th=[31851], 60.00th=[32375], 00:38:36.551 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:38:36.551 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:38:36.551 | 99.99th=[34866] 00:38:36.551 bw ( KiB/s): min= 1920, max= 2816, per=4.85%, avg=2297.00, stdev=326.48, samples=19 00:38:36.551 iops : min= 480, max= 704, avg=574.21, stdev=81.65, samples=19 00:38:36.551 lat (msec) : 20=5.40%, 50=94.60% 00:38:36.551 cpu : usr=99.14%, sys=0.58%, ctx=12, majf=0, minf=9 00:38:36.551 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:36.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.551 filename0: (groupid=0, jobs=1): err= 0: pid=4061448: Tue Oct 1 08:53:26 2024 00:38:36.551 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.0MiB/10003msec) 00:38:36.551 slat (nsec): min=5563, max=60832, avg=9770.57, stdev=6837.58 00:38:36.551 clat (usec): min=5025, max=63400, avg=32814.72, stdev=2782.92 00:38:36.551 lat (usec): min=5031, max=63428, avg=32824.49, stdev=2783.15 00:38:36.551 clat percentiles (usec): 00:38:36.551 | 1.00th=[24249], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:38:36.551 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:38:36.551 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.551 | 99.00th=[38011], 99.50th=[43779], 99.90th=[63177], 99.95th=[63177], 00:38:36.551 | 99.99th=[63177] 00:38:36.551 bw ( KiB/s): min= 1792, max= 2032, per=4.10%, avg=1941.05, stdev=53.08, samples=19 00:38:36.551 iops : min= 448, max= 508, avg=485.26, stdev=13.27, samples=19 00:38:36.551 lat (msec) : 10=0.21%, 20=0.37%, 50=99.10%, 100=0.33% 00:38:36.551 cpu : usr=98.58%, sys=0.90%, ctx=76, majf=0, 
minf=9 00:38:36.551 IO depths : 1=0.1%, 2=2.2%, 4=8.6%, 8=72.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:38:36.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 complete : 0=0.0%, 4=91.1%, 8=7.0%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.551 filename0: (groupid=0, jobs=1): err= 0: pid=4061449: Tue Oct 1 08:53:26 2024 00:38:36.551 read: IOPS=487, BW=1948KiB/s (1995kB/s)(19.1MiB/10020msec) 00:38:36.551 slat (nsec): min=5232, max=67172, avg=14504.84, stdev=10666.24 00:38:36.551 clat (usec): min=21102, max=36378, avg=32723.07, stdev=1240.79 00:38:36.551 lat (usec): min=21109, max=36384, avg=32737.58, stdev=1241.65 00:38:36.551 clat percentiles (usec): 00:38:36.551 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:38:36.551 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.551 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.551 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:38:36.551 | 99.99th=[36439] 00:38:36.551 bw ( KiB/s): min= 1912, max= 2048, per=4.11%, avg=1944.75, stdev=52.37, samples=20 00:38:36.551 iops : min= 478, max= 512, avg=486.15, stdev=13.02, samples=20 00:38:36.551 lat (msec) : 50=100.00% 00:38:36.551 cpu : usr=98.95%, sys=0.69%, ctx=69, majf=0, minf=9 00:38:36.551 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:36.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.551 filename0: (groupid=0, jobs=1): err= 0: pid=4061450: Tue Oct 1 08:53:26 2024 00:38:36.551 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10010msec) 00:38:36.551 slat (nsec): min=5588, max=85402, avg=12767.88, stdev=9135.49 00:38:36.551 clat (usec): min=15196, max=35040, avg=32183.72, stdev=2558.51 00:38:36.551 lat (usec): min=15206, max=35056, avg=32196.49, stdev=2558.65 00:38:36.551 clat percentiles (usec): 00:38:36.551 | 1.00th=[21627], 5.00th=[25297], 10.00th=[31851], 20.00th=[32113], 00:38:36.551 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.551 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:38:36.551 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:38:36.551 | 99.99th=[34866] 00:38:36.551 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1980.37, stdev=65.39, samples=19 00:38:36.551 iops : min= 480, max= 512, avg=495.05, stdev=16.31, samples=19 00:38:36.551 lat (msec) : 20=0.65%, 50=99.35% 00:38:36.551 cpu : usr=99.04%, sys=0.66%, ctx=10, majf=0, minf=9 00:38:36.551 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:36.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.551 filename0: (groupid=0, jobs=1): err= 0: pid=4061451: Tue Oct 1 08:53:26 2024 00:38:36.551 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10006msec) 00:38:36.551 slat (nsec): min=5494, 
max=73319, avg=16089.65, stdev=10559.74 00:38:36.551 clat (usec): min=7137, max=66618, avg=32764.84, stdev=2741.54 00:38:36.551 lat (usec): min=7142, max=66640, avg=32780.93, stdev=2741.21 00:38:36.551 clat percentiles (usec): 00:38:36.551 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:38:36.551 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.551 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.551 | 99.00th=[35390], 99.50th=[43254], 99.90th=[66323], 99.95th=[66847], 00:38:36.551 | 99.99th=[66847] 00:38:36.551 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1933.05, stdev=58.85, samples=19 00:38:36.551 iops : min= 448, max= 512, avg=483.26, stdev=14.71, samples=19 00:38:36.551 lat (msec) : 10=0.33%, 20=0.04%, 50=99.30%, 100=0.33% 00:38:36.551 cpu : usr=99.09%, sys=0.63%, ctx=15, majf=0, minf=9 00:38:36.551 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:36.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.551 filename0: (groupid=0, jobs=1): err= 0: pid=4061452: Tue Oct 1 08:53:26 2024 00:38:36.551 read: IOPS=485, BW=1940KiB/s (1987kB/s)(19.0MiB/10014msec) 00:38:36.551 slat (usec): min=5, max=132, avg=23.59, stdev=19.71 00:38:36.551 clat (usec): min=16442, max=59735, avg=32766.29, stdev=3831.37 00:38:36.551 lat (usec): min=16466, max=59742, avg=32789.88, stdev=3830.97 00:38:36.551 clat percentiles (usec): 00:38:36.551 | 1.00th=[21890], 5.00th=[26870], 10.00th=[31589], 20.00th=[31851], 00:38:36.551 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.551 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[38536], 00:38:36.551 | 99.00th=[46924], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:38:36.551 | 99.99th=[59507] 00:38:36.551 bw ( KiB/s): min= 1792, max= 2112, per=4.11%, avg=1944.16, stdev=84.16, samples=19 00:38:36.551 iops : min= 448, max= 528, avg=486.00, stdev=21.01, samples=19 00:38:36.551 lat (msec) : 20=0.29%, 50=98.76%, 100=0.95% 00:38:36.551 cpu : usr=99.01%, sys=0.70%, ctx=14, majf=0, minf=9 00:38:36.551 IO depths : 1=4.8%, 2=9.8%, 4=20.7%, 8=56.5%, 16=8.2%, 32=0.0%, >=64=0.0% 00:38:36.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 complete : 0=0.0%, 4=93.0%, 8=1.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 issued rwts: total=4858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.551 filename0: (groupid=0, jobs=1): err= 0: pid=4061453: Tue Oct 1 08:53:26 2024 00:38:36.551 read: IOPS=486, BW=1947KiB/s (1993kB/s)(19.0MiB/10015msec) 00:38:36.551 slat (usec): min=5, max=129, avg=24.40, stdev=18.91 00:38:36.551 clat (usec): min=12677, max=46493, avg=32631.19, stdev=2003.46 00:38:36.551 lat (usec): min=12683, max=46527, avg=32655.59, stdev=2003.03 00:38:36.551 clat percentiles (usec): 00:38:36.551 | 1.00th=[23725], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:36.551 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.551 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.551 | 99.00th=[40109], 99.50th=[42206], 99.90th=[45351], 99.95th=[46400], 00:38:36.551 | 99.99th=[46400] 
00:38:36.551 bw ( KiB/s): min= 1904, max= 2048, per=4.11%, avg=1944.21, stdev=51.37, samples=19 00:38:36.551 iops : min= 476, max= 512, avg=486.05, stdev=12.84, samples=19 00:38:36.551 lat (msec) : 20=0.45%, 50=99.55% 00:38:36.551 cpu : usr=99.10%, sys=0.61%, ctx=13, majf=0, minf=9 00:38:36.551 IO depths : 1=5.8%, 2=11.5%, 4=23.3%, 8=52.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:36.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.551 issued rwts: total=4874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.551 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.551 filename0: (groupid=0, jobs=1): err= 0: pid=4061454: Tue Oct 1 08:53:26 2024 00:38:36.551 read: IOPS=496, BW=1985KiB/s (2032kB/s)(19.4MiB/10008msec) 00:38:36.551 slat (usec): min=5, max=129, avg=19.79, stdev=18.53 00:38:36.551 clat (usec): min=15944, max=55299, avg=32087.65, stdev=4582.49 00:38:36.551 lat (usec): min=15952, max=55334, avg=32107.44, stdev=4584.35 00:38:36.551 clat percentiles (usec): 00:38:36.551 | 1.00th=[20317], 5.00th=[22414], 10.00th=[26084], 20.00th=[31851], 00:38:36.551 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.551 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[38536], 00:38:36.551 | 99.00th=[49021], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:38:36.551 | 99.99th=[55313] 00:38:36.551 bw ( KiB/s): min= 1792, max= 2096, per=4.17%, avg=1972.79, stdev=80.11, samples=19 00:38:36.551 iops : min= 448, max= 524, avg=493.16, stdev=19.99, samples=19 00:38:36.552 lat (msec) : 20=0.48%, 50=98.59%, 100=0.93% 00:38:36.552 cpu : usr=98.59%, sys=1.04%, ctx=63, majf=0, minf=9 00:38:36.552 IO depths : 1=4.4%, 2=8.8%, 4=19.2%, 8=59.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:38:36.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.552 filename1: (groupid=0, jobs=1): err= 0: pid=4061455: Tue Oct 1 08:53:26 2024 00:38:36.552 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10023msec) 00:38:36.552 slat (usec): min=4, max=124, avg=18.60, stdev=16.46 00:38:36.552 clat (usec): min=15364, max=44300, avg=32606.20, stdev=1759.04 00:38:36.552 lat (usec): min=15396, max=44307, avg=32624.80, stdev=1757.49 00:38:36.552 clat percentiles (usec): 00:38:36.552 | 1.00th=[22676], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:38:36.552 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:38:36.552 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.552 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36963], 00:38:36.552 | 99.99th=[44303] 00:38:36.552 bw ( KiB/s): min= 1916, max= 2052, per=4.13%, avg=1952.00, stdev=57.35, samples=20 00:38:36.552 iops : min= 479, max= 513, avg=488.00, stdev=14.34, samples=20 00:38:36.552 lat (msec) : 20=0.37%, 50=99.63% 00:38:36.552 cpu : usr=99.06%, sys=0.65%, ctx=11, majf=0, minf=9 00:38:36.552 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:36.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:38:36.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.552 filename1: (groupid=0, jobs=1): err= 0: pid=4061456: Tue Oct 1 08:53:26 2024 00:38:36.552 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10021msec) 00:38:36.552 slat (usec): min=5, max=127, avg=16.23, stdev=14.78 00:38:36.552 clat (usec): min=15271, max=44039, avg=32621.01, stdev=1837.92 00:38:36.552 lat (usec): min=15291, max=44063, avg=32637.24, stdev=1836.18 00:38:36.552 clat percentiles (usec): 00:38:36.552 | 1.00th=[21890], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:38:36.552 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:38:36.552 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.552 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:38:36.552 | 99.99th=[43779] 00:38:36.552 bw ( KiB/s): min= 1792, max= 2052, per=4.13%, avg=1952.20, stdev=70.71, samples=20 00:38:36.552 iops : min= 448, max= 513, avg=488.05, stdev=17.68, samples=20 00:38:36.552 lat (msec) : 20=0.65%, 50=99.35% 00:38:36.552 cpu : usr=99.06%, sys=0.66%, ctx=13, majf=0, minf=9 00:38:36.552 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:36.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.552 filename1: (groupid=0, jobs=1): err= 0: pid=4061457: Tue Oct 1 08:53:26 2024 00:38:36.552 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10007msec) 00:38:36.552 slat (usec): min=5, max=121, avg=18.04, stdev=13.52 00:38:36.552 clat (usec): min=20239, max=55083, avg=32874.85, stdev=1524.88 00:38:36.552 lat (usec): min=20245, max=55109, avg=32892.90, stdev=1523.64 00:38:36.552 clat percentiles (usec): 00:38:36.552 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:38:36.552 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:38:36.552 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.552 | 99.00th=[36439], 99.50th=[37487], 99.90th=[52691], 99.95th=[52691], 00:38:36.552 | 99.99th=[55313] 00:38:36.552 bw ( KiB/s): min= 1795, max= 2048, per=4.09%, avg=1933.42, stdev=58.39, samples=19 00:38:36.552 iops : min= 448, max= 512, avg=483.32, stdev=14.70, samples=19 00:38:36.552 lat (msec) : 50=99.67%, 100=0.33% 00:38:36.552 cpu : usr=98.43%, sys=0.99%, ctx=156, majf=0, minf=9 00:38:36.552 IO depths : 1=5.7%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:36.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.552 filename1: (groupid=0, jobs=1): err= 0: pid=4061458: Tue Oct 1 08:53:26 2024 00:38:36.552 read: IOPS=486, BW=1946KiB/s (1993kB/s)(19.0MiB/10011msec) 00:38:36.552 slat (nsec): min=5872, max=88814, avg=22433.88, stdev=13870.24 00:38:36.552 clat (usec): min=11174, max=60955, avg=32689.06, stdev=2172.18 00:38:36.552 lat (usec): min=11183, max=60986, avg=32711.49, stdev=2172.02 00:38:36.552 clat percentiles (usec): 00:38:36.552 | 1.00th=[24511], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:38:36.552 | 30.00th=[32375], 
40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.552 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.552 | 99.00th=[35914], 99.50th=[41681], 99.90th=[60556], 99.95th=[61080], 00:38:36.552 | 99.99th=[61080] 00:38:36.552 bw ( KiB/s): min= 1916, max= 2048, per=4.11%, avg=1942.26, stdev=47.61, samples=19 00:38:36.552 iops : min= 479, max= 512, avg=485.53, stdev=11.82, samples=19 00:38:36.552 lat (msec) : 20=0.21%, 50=99.61%, 100=0.18% 00:38:36.552 cpu : usr=98.88%, sys=0.81%, ctx=23, majf=0, minf=9 00:38:36.552 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:36.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.552 filename1: (groupid=0, jobs=1): err= 0: pid=4061459: Tue Oct 1 08:53:26 2024 00:38:36.552 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10012msec) 00:38:36.552 slat (usec): min=5, max=122, avg=24.85, stdev=18.87 00:38:36.552 clat (usec): min=16817, max=64604, avg=32754.84, stdev=1596.81 00:38:36.552 lat (usec): min=16823, max=64629, avg=32779.69, stdev=1594.78 00:38:36.552 clat percentiles (usec): 00:38:36.552 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:38:36.552 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32900], 00:38:36.552 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:36.552 | 99.00th=[35390], 99.50th=[40633], 99.90th=[43254], 99.95th=[49021], 00:38:36.552 | 99.99th=[64750] 00:38:36.552 bw ( KiB/s): min= 1792, max= 2032, per=4.10%, avg=1938.89, stdev=57.48, samples=19 00:38:36.552 iops : min= 448, max= 508, avg=484.68, stdev=14.31, samples=19 00:38:36.552 lat (msec) : 20=0.29%, 50=99.67%, 100=0.04% 00:38:36.552 cpu : usr=99.13%, sys=0.57%, ctx=14, majf=0, minf=9 00:38:36.552 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:38:36.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 issued rwts: total=4862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.552 filename1: (groupid=0, jobs=1): err= 0: pid=4061460: Tue Oct 1 08:53:26 2024 00:38:36.552 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10010msec) 00:38:36.552 slat (nsec): min=5596, max=97407, avg=12029.97, stdev=8822.95 00:38:36.552 clat (usec): min=14910, max=47240, avg=31777.27, stdev=3224.82 00:38:36.552 lat (usec): min=14923, max=47248, avg=31789.30, stdev=3225.31 00:38:36.552 clat percentiles (usec): 00:38:36.552 | 1.00th=[19792], 5.00th=[23200], 10.00th=[30278], 20.00th=[32113], 00:38:36.552 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.552 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:38:36.552 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36963], 99.95th=[43779], 00:38:36.552 | 99.99th=[47449] 00:38:36.552 bw ( KiB/s): min= 1920, max= 2176, per=4.24%, avg=2007.32, stdev=85.77, samples=19 00:38:36.552 iops : min= 480, max= 544, avg=501.79, stdev=21.43, samples=19 00:38:36.552 lat (msec) : 20=1.31%, 50=98.69% 00:38:36.552 cpu : usr=98.97%, sys=0.73%, ctx=9, majf=0, minf=9 00:38:36.552 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 
16=6.4%, 32=0.0%, >=64=0.0% 00:38:36.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.552 filename1: (groupid=0, jobs=1): err= 0: pid=4061461: Tue Oct 1 08:53:26 2024 00:38:36.552 read: IOPS=486, BW=1945KiB/s (1992kB/s)(19.0MiB/10003msec) 00:38:36.552 slat (nsec): min=5487, max=73987, avg=15625.18, stdev=10425.78 00:38:36.552 clat (usec): min=7224, max=63979, avg=32766.11, stdev=2702.46 00:38:36.552 lat (usec): min=7230, max=63998, avg=32781.73, stdev=2702.03 00:38:36.552 clat percentiles (usec): 00:38:36.552 | 1.00th=[28443], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:38:36.552 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.552 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.552 | 99.00th=[35914], 99.50th=[44303], 99.90th=[63701], 99.95th=[63701], 00:38:36.552 | 99.99th=[64226] 00:38:36.552 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1940.21, stdev=64.19, samples=19 00:38:36.552 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:38:36.552 lat (msec) : 10=0.33%, 20=0.12%, 50=99.22%, 100=0.33% 00:38:36.552 cpu : usr=98.79%, sys=0.77%, ctx=64, majf=0, minf=9 00:38:36.552 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:36.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.552 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.552 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.552 filename1: (groupid=0, jobs=1): err= 0: pid=4061462: Tue Oct 1 08:53:26 2024 00:38:36.552 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10010msec) 00:38:36.552 slat (usec): min=4, max=125, avg=18.59, stdev=15.22 00:38:36.552 clat (usec): min=15777, max=45063, avg=32456.81, stdev=2014.41 00:38:36.552 lat (usec): min=15793, max=45074, avg=32475.40, stdev=2014.52 00:38:36.552 clat percentiles (usec): 00:38:36.553 | 1.00th=[22938], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:36.553 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.553 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.553 | 99.00th=[34866], 99.50th=[34866], 99.90th=[41681], 99.95th=[44303], 00:38:36.553 | 99.99th=[44827] 00:38:36.553 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1960.16, stdev=60.74, samples=19 00:38:36.553 iops : min= 480, max= 512, avg=490.00, stdev=15.13, samples=19 00:38:36.553 lat (msec) : 20=0.65%, 50=99.35% 00:38:36.553 cpu : usr=98.65%, sys=0.78%, ctx=147, majf=0, minf=9 00:38:36.553 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:36.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.553 filename2: (groupid=0, jobs=1): err= 0: pid=4061463: Tue Oct 1 08:53:26 2024 00:38:36.553 read: IOPS=488, BW=1952KiB/s (1999kB/s)(19.1MiB/10003msec) 00:38:36.553 slat (nsec): min=5424, max=79834, avg=19184.25, stdev=12616.12 00:38:36.553 
clat (usec): min=5158, max=63571, avg=32627.92, stdev=3752.16 00:38:36.553 lat (usec): min=5164, max=63595, avg=32647.11, stdev=3752.75 00:38:36.553 clat percentiles (usec): 00:38:36.553 | 1.00th=[20579], 5.00th=[27395], 10.00th=[31851], 20.00th=[32113], 00:38:36.553 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.553 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:38:36.553 | 99.00th=[47449], 99.50th=[50070], 99.90th=[63701], 99.95th=[63701], 00:38:36.553 | 99.99th=[63701] 00:38:36.553 bw ( KiB/s): min= 1792, max= 2160, per=4.11%, avg=1946.95, stdev=86.01, samples=19 00:38:36.553 iops : min= 448, max= 540, avg=486.74, stdev=21.50, samples=19 00:38:36.553 lat (msec) : 10=0.04%, 20=0.61%, 50=98.75%, 100=0.59% 00:38:36.553 cpu : usr=99.05%, sys=0.66%, ctx=41, majf=0, minf=9 00:38:36.553 IO depths : 1=2.3%, 2=7.2%, 4=20.3%, 8=59.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:38:36.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 complete : 0=0.0%, 4=93.2%, 8=1.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 issued rwts: total=4882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.553 filename2: (groupid=0, jobs=1): err= 0: pid=4061464: Tue Oct 1 08:53:26 2024 00:38:36.553 read: IOPS=504, BW=2016KiB/s (2065kB/s)(19.7MiB/10010msec) 00:38:36.553 slat (nsec): min=5586, max=66035, avg=8638.26, stdev=4969.38 00:38:36.553 clat (usec): min=9154, max=45623, avg=31665.36, stdev=3644.06 00:38:36.553 lat (usec): min=9175, max=45629, avg=31674.00, stdev=3643.75 00:38:36.553 clat percentiles (usec): 00:38:36.553 | 1.00th=[18744], 5.00th=[21365], 10.00th=[27132], 20.00th=[32113], 00:38:36.553 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.553 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.553 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:38:36.553 | 99.99th=[45876] 00:38:36.553 bw ( KiB/s): min= 1920, max= 2480, per=4.26%, avg=2016.58, stdev=128.88, samples=19 00:38:36.553 iops : min= 480, max= 620, avg=504.11, stdev=32.21, samples=19 00:38:36.553 lat (msec) : 10=0.32%, 20=1.70%, 50=97.98% 00:38:36.553 cpu : usr=98.68%, sys=0.89%, ctx=103, majf=0, minf=9 00:38:36.553 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:36.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 issued rwts: total=5046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.553 filename2: (groupid=0, jobs=1): err= 0: pid=4061465: Tue Oct 1 08:53:26 2024 00:38:36.553 read: IOPS=484, BW=1938KiB/s (1984kB/s)(18.9MiB/10007msec) 00:38:36.553 slat (usec): min=5, max=125, avg=23.59, stdev=18.70 00:38:36.553 clat (usec): min=20928, max=55597, avg=32825.40, stdev=2260.79 00:38:36.553 lat (usec): min=20935, max=55630, avg=32849.00, stdev=2259.31 00:38:36.553 clat percentiles (usec): 00:38:36.553 | 1.00th=[25035], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:36.553 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:38:36.553 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:36.553 | 99.00th=[41681], 99.50th=[45876], 99.90th=[52691], 99.95th=[55313], 00:38:36.553 | 99.99th=[55837] 00:38:36.553 bw ( KiB/s): min= 1792, max= 2048, 
per=4.09%, avg=1933.26, stdev=57.07, samples=19 00:38:36.553 iops : min= 448, max= 512, avg=483.32, stdev=14.27, samples=19 00:38:36.553 lat (msec) : 50=99.67%, 100=0.33% 00:38:36.553 cpu : usr=98.25%, sys=1.11%, ctx=105, majf=0, minf=9 00:38:36.553 IO depths : 1=4.5%, 2=10.6%, 4=24.5%, 8=52.4%, 16=8.0%, 32=0.0%, >=64=0.0% 00:38:36.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.553 filename2: (groupid=0, jobs=1): err= 0: pid=4061466: Tue Oct 1 08:53:26 2024 00:38:36.553 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.1MiB/10025msec) 00:38:36.553 slat (usec): min=5, max=134, avg=26.39, stdev=19.34 00:38:36.553 clat (usec): min=14951, max=40164, avg=32643.89, stdev=1478.09 00:38:36.553 lat (usec): min=14960, max=40199, avg=32670.28, stdev=1475.29 00:38:36.553 clat percentiles (usec): 00:38:36.553 | 1.00th=[26608], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:38:36.553 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.553 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:38:36.553 | 99.00th=[34866], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:38:36.553 | 99.99th=[40109] 00:38:36.553 bw ( KiB/s): min= 1916, max= 2048, per=4.11%, avg=1945.40, stdev=52.64, samples=20 00:38:36.553 iops : min= 479, max= 512, avg=486.35, stdev=13.16, samples=20 00:38:36.553 lat (msec) : 20=0.33%, 50=99.67% 00:38:36.553 cpu : usr=98.72%, sys=0.83%, ctx=44, majf=0, minf=9 00:38:36.553 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:36.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.553 filename2: (groupid=0, jobs=1): err= 0: pid=4061467: Tue Oct 1 08:53:26 2024 00:38:36.553 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10005msec) 00:38:36.553 slat (usec): min=5, max=124, avg=28.00, stdev=19.65 00:38:36.553 clat (usec): min=30251, max=42891, avg=32741.45, stdev=1007.22 00:38:36.553 lat (usec): min=30260, max=42914, avg=32769.45, stdev=1004.27 00:38:36.553 clat percentiles (usec): 00:38:36.553 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:38:36.553 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.553 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.553 | 99.00th=[35914], 99.50th=[36439], 99.90th=[42730], 99.95th=[42730], 00:38:36.553 | 99.99th=[42730] 00:38:36.553 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1933.05, stdev=58.85, samples=19 00:38:36.553 iops : min= 448, max= 512, avg=483.26, stdev=14.71, samples=19 00:38:36.553 lat (msec) : 50=100.00% 00:38:36.553 cpu : usr=98.38%, sys=1.01%, ctx=207, majf=0, minf=9 00:38:36.553 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:36.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.553 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:38:36.553 filename2: (groupid=0, jobs=1): err= 0: pid=4061468: Tue Oct 1 08:53:26 2024 00:38:36.553 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10023msec) 00:38:36.553 slat (usec): min=4, max=133, avg=18.87, stdev=17.77 00:38:36.553 clat (usec): min=14813, max=63263, avg=31882.77, stdev=5159.17 00:38:36.553 lat (usec): min=14821, max=63272, avg=31901.64, stdev=5161.54 00:38:36.553 clat percentiles (usec): 00:38:36.553 | 1.00th=[19530], 5.00th=[21890], 10.00th=[24249], 20.00th=[31589], 00:38:36.553 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.553 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[39060], 00:38:36.553 | 99.00th=[52691], 99.50th=[56886], 99.90th=[58983], 99.95th=[63177], 00:38:36.553 | 99.99th=[63177] 00:38:36.553 bw ( KiB/s): min= 1792, max= 2192, per=4.22%, avg=1996.80, stdev=100.87, samples=20 00:38:36.553 iops : min= 448, max= 548, avg=499.20, stdev=25.22, samples=20 00:38:36.553 lat (msec) : 20=1.52%, 50=97.00%, 100=1.48% 00:38:36.553 cpu : usr=98.69%, sys=0.85%, ctx=152, majf=0, minf=9 00:38:36.553 IO depths : 1=3.7%, 2=7.7%, 4=18.3%, 8=60.9%, 16=9.4%, 32=0.0%, >=64=0.0% 00:38:36.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 complete : 0=0.0%, 4=92.4%, 8=2.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.553 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.553 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.553 filename2: (groupid=0, jobs=1): err= 0: pid=4061469: Tue Oct 1 08:53:26 2024 00:38:36.553 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10021msec) 00:38:36.553 slat (usec): min=5, max=117, avg=20.75, stdev=18.87 00:38:36.553 clat (usec): min=15289, max=35942, avg=32569.60, stdev=1873.78 00:38:36.553 lat (usec): min=15300, max=35952, avg=32590.36, stdev=1871.49 00:38:36.553 clat percentiles (usec): 00:38:36.553 | 1.00th=[21627], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:36.553 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.553 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:36.553 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:38:36.553 | 99.99th=[35914] 00:38:36.553 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1951.80, stdev=70.52, samples=20 00:38:36.553 iops : min= 448, max= 512, avg=487.95, stdev=17.63, samples=20 00:38:36.553 lat (msec) : 20=0.94%, 50=99.06% 00:38:36.553 cpu : usr=99.00%, sys=0.69%, ctx=33, majf=0, minf=9 00:38:36.553 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:36.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.554 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.554 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.554 filename2: (groupid=0, jobs=1): err= 0: pid=4061470: Tue Oct 1 08:53:26 2024 00:38:36.554 read: IOPS=488, BW=1952KiB/s (1999kB/s)(19.1MiB/10006msec) 00:38:36.554 slat (nsec): min=5538, max=95927, avg=19519.48, stdev=12985.18 00:38:36.554 clat (usec): min=7484, max=66593, avg=32624.38, stdev=3577.72 00:38:36.554 lat (usec): min=7491, max=66616, avg=32643.90, stdev=3578.22 00:38:36.554 clat percentiles (usec): 00:38:36.554 | 1.00th=[21365], 5.00th=[28443], 10.00th=[32113], 20.00th=[32113], 00:38:36.554 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:38:36.554 | 
70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:36.554 | 99.00th=[44303], 99.50th=[47973], 99.90th=[66323], 99.95th=[66323], 00:38:36.554 | 99.99th=[66847] 00:38:36.554 bw ( KiB/s): min= 1792, max= 2096, per=4.12%, avg=1948.21, stdev=66.35, samples=19 00:38:36.554 iops : min= 448, max= 524, avg=487.05, stdev=16.59, samples=19 00:38:36.554 lat (msec) : 10=0.04%, 20=0.61%, 50=98.85%, 100=0.49% 00:38:36.554 cpu : usr=99.12%, sys=0.60%, ctx=12, majf=0, minf=9 00:38:36.554 IO depths : 1=2.5%, 2=7.7%, 4=21.2%, 8=57.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:38:36.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.554 complete : 0=0.0%, 4=93.4%, 8=1.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:36.554 issued rwts: total=4884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:36.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:36.554 00:38:36.554 Run status group 0 (all jobs): 00:38:36.554 READ: bw=46.2MiB/s (48.4MB/s), 1938KiB/s-2314KiB/s (1984kB/s-2370kB/s), io=463MiB (485MB), run=10003-10025msec 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 bdev_null0 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 [2024-10-01 08:53:26.924477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 bdev_null1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:36.554 08:53:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:36.554 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:36.555 { 00:38:36.555 "params": { 00:38:36.555 "name": "Nvme$subsystem", 00:38:36.555 "trtype": "$TEST_TRANSPORT", 00:38:36.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:36.555 "adrfam": "ipv4", 00:38:36.555 "trsvcid": "$NVMF_PORT", 00:38:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:36.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:36.555 "hdgst": ${hdgst:-false}, 00:38:36.555 "ddgst": ${ddgst:-false} 00:38:36.555 }, 00:38:36.555 "method": "bdev_nvme_attach_controller" 00:38:36.555 } 00:38:36.555 EOF 00:38:36.555 )") 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:36.555 { 00:38:36.555 "params": { 00:38:36.555 "name": "Nvme$subsystem", 00:38:36.555 "trtype": "$TEST_TRANSPORT", 00:38:36.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:36.555 "adrfam": "ipv4", 00:38:36.555 "trsvcid": "$NVMF_PORT", 00:38:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:36.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:36.555 "hdgst": ${hdgst:-false}, 00:38:36.555 "ddgst": ${ddgst:-false} 00:38:36.555 }, 00:38:36.555 "method": "bdev_nvme_attach_controller" 00:38:36.555 } 00:38:36.555 EOF 00:38:36.555 )") 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:36.555 "params": { 00:38:36.555 "name": "Nvme0", 00:38:36.555 "trtype": "tcp", 00:38:36.555 "traddr": "10.0.0.2", 00:38:36.555 "adrfam": "ipv4", 00:38:36.555 "trsvcid": "4420", 00:38:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:36.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:36.555 "hdgst": false, 00:38:36.555 "ddgst": false 00:38:36.555 }, 00:38:36.555 "method": "bdev_nvme_attach_controller" 00:38:36.555 },{ 00:38:36.555 "params": { 00:38:36.555 "name": "Nvme1", 00:38:36.555 "trtype": "tcp", 00:38:36.555 "traddr": "10.0.0.2", 00:38:36.555 "adrfam": "ipv4", 00:38:36.555 "trsvcid": "4420", 00:38:36.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:36.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:36.555 "hdgst": false, 00:38:36.555 "ddgst": false 00:38:36.555 }, 00:38:36.555 "method": "bdev_nvme_attach_controller" 00:38:36.555 }' 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:36.555 08:53:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:36.555 08:53:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:36.555 08:53:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:36.555 08:53:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:36.555 08:53:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:36.555 08:53:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:36.555 08:53:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:36.555 08:53:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:36.555 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:36.555 ... 00:38:36.555 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:36.555 ... 
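The fio invocation above gets its bdevs from the JSON written to /dev/fd/62 rather than from any block device node. A minimal standalone sketch of the same attach, assuming SPDK's usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} wrapper around the fragments printed above and a build tree at $SPDK_DIR (neither shown verbatim in this log):

    # reproduce the config fed to fio on /dev/fd/62 (sketch, not the script's exact output)
    cat > /tmp/bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    # launch fio through the SPDK bdev plugin the same way the test does
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json job.fio

With bs=8k,16k,128k fio applies the three sizes to reads, writes and trims respectively, which is why each filename above reports (R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB; the four threads started next are the two jobs times numjobs=2.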
00:38:36.555 fio-3.35 00:38:36.555 Starting 4 threads 00:38:41.842 00:38:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=4063725: Tue Oct 1 08:53:33 2024 00:38:41.842 read: IOPS=2012, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5002msec) 00:38:41.842 slat (nsec): min=5414, max=82479, avg=6342.11, stdev=2905.85 00:38:41.842 clat (usec): min=1948, max=7924, avg=3957.96, stdev=728.65 00:38:41.842 lat (usec): min=1954, max=7952, avg=3964.31, stdev=728.55 00:38:41.842 clat percentiles (usec): 00:38:41.842 | 1.00th=[ 2868], 5.00th=[ 3195], 10.00th=[ 3392], 20.00th=[ 3523], 00:38:41.842 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3818], 00:38:41.842 | 70.00th=[ 3884], 80.00th=[ 4146], 90.00th=[ 5342], 95.00th=[ 5604], 00:38:41.842 | 99.00th=[ 6063], 99.50th=[ 6128], 99.90th=[ 6718], 99.95th=[ 7570], 00:38:41.842 | 99.99th=[ 7635] 00:38:41.843 bw ( KiB/s): min=15760, max=16240, per=24.28%, avg=16046.22, stdev=171.29, samples=9 00:38:41.843 iops : min= 1970, max= 2030, avg=2005.78, stdev=21.41, samples=9 00:38:41.843 lat (msec) : 2=0.03%, 4=74.93%, 10=25.04% 00:38:41.843 cpu : usr=97.34%, sys=2.42%, ctx=6, majf=0, minf=62 00:38:41.843 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.843 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.843 issued rwts: total=10065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.843 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:41.843 filename0: (groupid=0, jobs=1): err= 0: pid=4063727: Tue Oct 1 08:53:33 2024 00:38:41.843 read: IOPS=2017, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5002msec) 00:38:41.843 slat (nsec): min=5418, max=81511, avg=6386.51, stdev=2973.03 00:38:41.843 clat (usec): min=1951, max=6549, avg=3947.65, stdev=653.09 00:38:41.843 lat (usec): min=1957, max=6554, avg=3954.04, stdev=653.01 00:38:41.843 clat percentiles (usec): 00:38:41.843 | 1.00th=[ 2999], 5.00th=[ 3326], 10.00th=[ 3458], 20.00th=[ 3556], 00:38:41.843 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:38:41.843 | 70.00th=[ 3884], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 5538], 00:38:41.843 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6390], 99.95th=[ 6390], 00:38:41.843 | 99.99th=[ 6521] 00:38:41.843 bw ( KiB/s): min=15824, max=16592, per=24.42%, avg=16137.60, stdev=196.71, samples=10 00:38:41.843 iops : min= 1978, max= 2074, avg=2017.20, stdev=24.59, samples=10 00:38:41.843 lat (msec) : 2=0.03%, 4=75.25%, 10=24.72% 00:38:41.843 cpu : usr=97.00%, sys=2.78%, ctx=9, majf=0, minf=152 00:38:41.843 IO depths : 1=0.1%, 2=0.2%, 4=72.8%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.843 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.843 issued rwts: total=10092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.843 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=4063728: Tue Oct 1 08:53:33 2024 00:38:41.843 read: IOPS=2013, BW=15.7MiB/s (16.5MB/s)(78.7MiB/5003msec) 00:38:41.843 slat (nsec): min=5425, max=66388, avg=6035.39, stdev=2110.60 00:38:41.843 clat (usec): min=2023, max=7269, avg=3956.91, stdev=710.72 00:38:41.843 lat (usec): min=2029, max=7275, avg=3962.94, stdev=710.68 00:38:41.843 clat percentiles (usec): 00:38:41.843 | 1.00th=[ 2868], 5.00th=[ 3195], 10.00th=[ 3359], 20.00th=[ 3523], 00:38:41.843 | 30.00th=[ 3589], 
40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:38:41.843 | 70.00th=[ 3884], 80.00th=[ 4178], 90.00th=[ 5342], 95.00th=[ 5538], 00:38:41.843 | 99.00th=[ 6063], 99.50th=[ 6128], 99.90th=[ 6521], 99.95th=[ 6652], 00:38:41.843 | 99.99th=[ 7242] 00:38:41.843 bw ( KiB/s): min=15536, max=16480, per=24.37%, avg=16102.40, stdev=292.43, samples=10 00:38:41.843 iops : min= 1942, max= 2060, avg=2012.80, stdev=36.55, samples=10 00:38:41.843 lat (msec) : 4=74.12%, 10=25.88% 00:38:41.843 cpu : usr=97.02%, sys=2.74%, ctx=6, majf=0, minf=64 00:38:41.843 IO depths : 1=0.1%, 2=0.1%, 4=72.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.843 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.843 issued rwts: total=10072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.843 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=4063729: Tue Oct 1 08:53:33 2024 00:38:41.843 read: IOPS=2218, BW=17.3MiB/s (18.2MB/s)(86.7MiB/5004msec) 00:38:41.843 slat (nsec): min=5408, max=83437, avg=5987.68, stdev=2063.07 00:38:41.843 clat (usec): min=1073, max=5605, avg=3590.72, stdev=462.49 00:38:41.843 lat (usec): min=1094, max=5611, avg=3596.71, stdev=462.07 00:38:41.843 clat percentiles (usec): 00:38:41.843 | 1.00th=[ 2409], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3326], 00:38:41.843 | 30.00th=[ 3425], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:38:41.843 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 3949], 95.00th=[ 4228], 00:38:41.843 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5276], 99.95th=[ 5342], 00:38:41.843 | 99.99th=[ 5604] 00:38:41.843 bw ( KiB/s): min=17200, max=18592, per=26.86%, avg=17750.40, stdev=458.18, samples=10 00:38:41.843 iops : min= 2150, max= 2324, avg=2218.80, stdev=57.27, samples=10 00:38:41.843 lat (msec) : 2=0.66%, 4=90.78%, 10=8.57% 00:38:41.843 cpu : usr=97.54%, sys=2.22%, ctx=6, majf=0, minf=63 00:38:41.843 IO depths : 1=0.1%, 2=1.4%, 4=65.6%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.843 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:41.843 issued rwts: total=11102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:41.843 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:41.843 00:38:41.843 Run status group 0 (all jobs): 00:38:41.843 READ: bw=64.5MiB/s (67.7MB/s), 15.7MiB/s-17.3MiB/s (16.5MB/s-18.2MB/s), io=323MiB (339MB), run=5002-5004msec 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.843 00:38:41.843 real 0m24.245s 00:38:41.843 user 5m19.298s 00:38:41.843 sys 0m4.127s 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.843 08:53:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:41.843 ************************************ 00:38:41.843 END TEST fio_dif_rand_params 00:38:41.843 ************************************ 00:38:41.843 08:53:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:41.843 08:53:33 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:41.843 08:53:33 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:41.843 08:53:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:41.843 ************************************ 00:38:41.843 START TEST fio_dif_digest 00:38:41.843 ************************************ 00:38:41.843 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:38:41.843 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
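create_subsystems 0 here issues the same four RPCs seen earlier in the run, now backing the subsystem with a type-3 DIF null bdev (NULL_DIF=3 above selects --dif-type 3). The equivalent standalone commands, assuming a running target and that rpc_cmd is the autotest wrapper around scripts/rpc.py:

    # 64 MiB null bdev with 512-byte blocks and 16 bytes of metadata carrying DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # subsystem, namespace and TCP listener, matching the trace that follows
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420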
00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:41.844 bdev_null0 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:41.844 [2024-10-01 08:53:33.289089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 
00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:38:41.844 { 00:38:41.844 "params": { 00:38:41.844 "name": "Nvme$subsystem", 00:38:41.844 "trtype": "$TEST_TRANSPORT", 00:38:41.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:41.844 "adrfam": "ipv4", 00:38:41.844 "trsvcid": "$NVMF_PORT", 00:38:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:41.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:41.844 "hdgst": ${hdgst:-false}, 00:38:41.844 "ddgst": ${ddgst:-false} 00:38:41.844 }, 00:38:41.844 "method": "bdev_nvme_attach_controller" 00:38:41.844 } 00:38:41.844 EOF 00:38:41.844 )") 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:38:41.844 "params": { 00:38:41.844 "name": "Nvme0", 00:38:41.844 "trtype": "tcp", 00:38:41.844 "traddr": "10.0.0.2", 00:38:41.844 "adrfam": "ipv4", 00:38:41.844 "trsvcid": "4420", 00:38:41.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:41.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:41.844 "hdgst": true, 00:38:41.844 "ddgst": true 00:38:41.844 }, 00:38:41.844 "method": "bdev_nvme_attach_controller" 00:38:41.844 }' 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:41.844 08:53:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:42.105 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:42.105 ... 
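Relative to the rand_params run, the only change in the JSON above is "hdgst": true and "ddgst": true, so the initiator negotiates NVMe/TCP header and data digests (CRC32C) and every PDU on the connection is checksummed during this 128k random-read workload. A job file matching the parameters fio just echoed might look like the sketch below; the bdev name Nvme0n1 is an assumption based on bdev_nvme's usual <controller>n<nsid> naming:

    # sketch of a job file equivalent to the generated one (thread=1 is required
    # by the SPDK fio plugin; remaining values are taken from the trace above)
    cat > /tmp/digest.fio <<'EOF'
    [global]
    thread=1
    time_based=1
    runtime=10
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

    [filename0]
    filename=Nvme0n1
    EOF
    # then run it through the plugin as before:
    #   LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
    #       /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /tmp/digest.fio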
00:38:42.105 fio-3.35 00:38:42.105 Starting 3 threads 00:38:54.347 00:38:54.347 filename0: (groupid=0, jobs=1): err= 0: pid=4065177: Tue Oct 1 08:53:44 2024 00:38:54.347 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(281MiB/10046msec) 00:38:54.347 slat (nsec): min=5760, max=34315, avg=7625.17, stdev=1905.54 00:38:54.347 clat (usec): min=10249, max=54238, avg=13401.75, stdev=1556.35 00:38:54.347 lat (usec): min=10255, max=54245, avg=13409.38, stdev=1556.32 00:38:54.347 clat percentiles (usec): 00:38:54.347 | 1.00th=[11076], 5.00th=[11731], 10.00th=[12125], 20.00th=[12518], 00:38:54.347 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:38:54.347 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:38:54.347 | 99.00th=[15926], 99.50th=[16319], 99.90th=[22414], 99.95th=[49546], 00:38:54.347 | 99.99th=[54264] 00:38:54.347 bw ( KiB/s): min=27648, max=29952, per=34.11%, avg=28698.95, stdev=637.98, samples=19 00:38:54.347 iops : min= 216, max= 234, avg=224.21, stdev= 4.98, samples=19 00:38:54.347 lat (msec) : 20=99.87%, 50=0.09%, 100=0.04% 00:38:54.347 cpu : usr=94.08%, sys=5.66%, ctx=31, majf=0, minf=82 00:38:54.347 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.347 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.347 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:54.347 filename0: (groupid=0, jobs=1): err= 0: pid=4065178: Tue Oct 1 08:53:44 2024 00:38:54.347 read: IOPS=218, BW=27.3MiB/s (28.7MB/s)(273MiB/10003msec) 00:38:54.347 slat (usec): min=5, max=122, avg= 7.35, stdev= 2.85 00:38:54.347 clat (usec): min=7860, max=18502, avg=13712.41, stdev=989.17 00:38:54.347 lat (usec): min=7867, max=18533, avg=13719.77, stdev=989.30 00:38:54.347 clat percentiles (usec): 00:38:54.347 | 1.00th=[11469], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:38:54.347 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:38:54.347 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:38:54.347 | 99.00th=[16188], 99.50th=[16712], 99.90th=[16909], 99.95th=[17171], 00:38:54.347 | 99.99th=[18482] 00:38:54.347 bw ( KiB/s): min=26624, max=28672, per=33.25%, avg=27971.37, stdev=458.28, samples=19 00:38:54.347 iops : min= 208, max= 224, avg=218.53, stdev= 3.58, samples=19 00:38:54.347 lat (msec) : 10=0.09%, 20=99.91% 00:38:54.347 cpu : usr=94.23%, sys=5.51%, ctx=28, majf=0, minf=133 00:38:54.347 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.347 issued rwts: total=2187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.347 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:54.347 filename0: (groupid=0, jobs=1): err= 0: pid=4065179: Tue Oct 1 08:53:44 2024 00:38:54.347 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(272MiB/10046msec) 00:38:54.347 slat (nsec): min=5679, max=31744, avg=7335.19, stdev=1778.40 00:38:54.347 clat (usec): min=10479, max=53068, avg=13846.16, stdev=1540.11 00:38:54.347 lat (usec): min=10486, max=53076, avg=13853.50, stdev=1540.19 00:38:54.347 clat percentiles (usec): 00:38:54.347 | 1.00th=[11469], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:38:54.347 | 30.00th=[13304], 
40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:38:54.347 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:38:54.347 | 99.00th=[16319], 99.50th=[16712], 99.90th=[20579], 99.95th=[49546], 00:38:54.347 | 99.99th=[53216] 00:38:54.347 bw ( KiB/s): min=26880, max=29696, per=33.02%, avg=27776.00, stdev=566.38, samples=20 00:38:54.347 iops : min= 210, max= 232, avg=217.00, stdev= 4.42, samples=20 00:38:54.347 lat (msec) : 20=99.77%, 50=0.18%, 100=0.05% 00:38:54.347 cpu : usr=94.39%, sys=5.37%, ctx=21, majf=0, minf=97 00:38:54.347 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.347 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.347 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:54.347 00:38:54.347 Run status group 0 (all jobs): 00:38:54.347 READ: bw=82.2MiB/s (86.1MB/s), 27.0MiB/s-27.9MiB/s (28.3MB/s-29.3MB/s), io=825MiB (865MB), run=10003-10046msec 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.347 00:38:54.347 real 0m11.082s 00:38:54.347 user 0m41.036s 00:38:54.347 sys 0m1.986s 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:54.347 08:53:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:54.347 ************************************ 00:38:54.347 END TEST fio_dif_digest 00:38:54.347 ************************************ 00:38:54.347 08:53:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:54.347 08:53:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:54.347 08:53:44 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:54.348 rmmod nvme_tcp 00:38:54.348 rmmod nvme_fabrics 00:38:54.348 rmmod nvme_keyring 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 4055009 ']' 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 4055009 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 4055009 ']' 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 4055009 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4055009 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4055009' 00:38:54.348 killing process with pid 4055009 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@969 -- # kill 4055009 00:38:54.348 08:53:44 nvmf_dif -- common/autotest_common.sh@974 -- # wait 4055009 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:38:54.348 08:53:44 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:56.261 Waiting for block devices as requested 00:38:56.262 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:56.262 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:56.262 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:56.262 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:56.262 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:56.522 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:56.522 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:56.522 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:56.522 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:56.782 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:56.782 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:57.042 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:57.042 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:57.042 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:57.042 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:57.303 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:57.303 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:57.562 08:53:49 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:57.562 08:53:49 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:57.562 08:53:49 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:57.562 08:53:49 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:38:57.562 08:53:49 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:57.563 08:53:49 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:38:57.563 08:53:49 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:57.563 08:53:49 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:57.563 08:53:49 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:57.563 08:53:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:57.563 08:53:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.105 08:53:51 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:00.105 00:39:00.105 real 1m17.137s 00:39:00.105 
user 8m3.440s 00:39:00.105 sys 0m21.256s 00:39:00.105 08:53:51 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:00.105 08:53:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:00.105 ************************************ 00:39:00.105 END TEST nvmf_dif 00:39:00.105 ************************************ 00:39:00.105 08:53:51 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:00.105 08:53:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:00.105 08:53:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:00.105 08:53:51 -- common/autotest_common.sh@10 -- # set +x 00:39:00.105 ************************************ 00:39:00.105 START TEST nvmf_abort_qd_sizes 00:39:00.105 ************************************ 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:00.105 * Looking for test storage... 00:39:00.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:00.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.105 --rc genhtml_branch_coverage=1 00:39:00.105 --rc genhtml_function_coverage=1 00:39:00.105 --rc genhtml_legend=1 00:39:00.105 --rc geninfo_all_blocks=1 00:39:00.105 --rc geninfo_unexecuted_blocks=1 00:39:00.105 00:39:00.105 ' 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:00.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.105 --rc genhtml_branch_coverage=1 00:39:00.105 --rc genhtml_function_coverage=1 00:39:00.105 --rc genhtml_legend=1 00:39:00.105 --rc geninfo_all_blocks=1 00:39:00.105 --rc geninfo_unexecuted_blocks=1 00:39:00.105 00:39:00.105 ' 00:39:00.105 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:00.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.105 --rc genhtml_branch_coverage=1 00:39:00.105 --rc genhtml_function_coverage=1 00:39:00.105 --rc genhtml_legend=1 00:39:00.106 --rc geninfo_all_blocks=1 00:39:00.106 --rc geninfo_unexecuted_blocks=1 00:39:00.106 00:39:00.106 ' 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:00.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.106 --rc genhtml_branch_coverage=1 00:39:00.106 --rc genhtml_function_coverage=1 00:39:00.106 --rc genhtml_legend=1 00:39:00.106 --rc geninfo_all_blocks=1 00:39:00.106 --rc geninfo_unexecuted_blocks=1 00:39:00.106 00:39:00.106 ' 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:00.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:00.106 08:53:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:06.692 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:06.692 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- 
# [[ tcp == rdma ]] 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:06.692 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:06.693 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:06.693 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:06.693 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:06.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:06.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:39:06.953 00:39:06.953 --- 10.0.0.2 ping statistics --- 00:39:06.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.953 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:06.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:06.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:39:06.953 00:39:06.953 --- 10.0.0.1 ping statistics --- 00:39:06.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.953 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:39:06.953 08:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:10.253 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:10.253 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:10.253 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:10.513 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=4074709 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 4074709 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 4074709 ']' 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
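The nvmftestinit trace above turns the two E810 ports into a point-to-point NVMe/TCP rig: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) to host the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator interface, and a ping in each direction proves the link. A condensed sketch of that wiring, using the names from this run:

  # point-to-point rig between two ports of the same NIC, as traced above
  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator

Every target-side command then has to run under "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt launch in this trace is prefixed with NVMF_TARGET_NS_CMD.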
00:39:11.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:11.084 08:54:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:11.084 [2024-10-01 08:54:02.733765] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:39:11.084 [2024-10-01 08:54:02.733813] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.084 [2024-10-01 08:54:02.798897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:11.084 [2024-10-01 08:54:02.863427] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.084 [2024-10-01 08:54:02.863464] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.084 [2024-10-01 08:54:02.863472] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.084 [2024-10-01 08:54:02.863479] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.084 [2024-10-01 08:54:02.863485] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.084 [2024-10-01 08:54:02.865223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.084 [2024-10-01 08:54:02.865393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:11.084 [2024-10-01 08:54:02.865549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.084 [2024-10-01 08:54:02.865550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:12.025 
08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:12.025 08:54:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:12.026 08:54:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:12.026 08:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:12.026 08:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.026 08:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:12.026 ************************************ 00:39:12.026 START TEST spdk_target_abort 00:39:12.026 ************************************ 00:39:12.026 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:39:12.026 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:12.026 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:12.026 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.026 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.287 spdk_targetn1 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.287 [2024-10-01 08:54:03.940040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:12.287 [2024-10-01 08:54:03.980319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:12.287 08:54:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:12.548 [2024-10-01 08:54:04.187736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:32 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:39:12.548 [2024-10-01 08:54:04.187765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0006 p:1 m:0 dnr:0 00:39:12.548 [2024-10-01 08:54:04.213442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:824 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:39:12.548 [2024-10-01 08:54:04.213461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:006b p:1 m:0 dnr:0 00:39:12.548 [2024-10-01 08:54:04.264529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2792 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:39:12.548 [2024-10-01 08:54:04.264549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:12.548 [2024-10-01 08:54:04.288454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3648 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:39:12.548 [2024-10-01 08:54:04.288471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00c9 p:0 m:0 dnr:0 00:39:12.548 [2024-10-01 08:54:04.297113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3992 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:39:12.548 [2024-10-01 08:54:04.297129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f4 p:0 m:0 dnr:0 00:39:15.848 Initializing NVMe Controllers 00:39:15.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:15.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:15.848 Initialization complete. Launching workers. 
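The three summary lines that follow each abort pass read as: the NS line counts I/O issued to the namespace (completed and failed), the CTRLR line counts Abort admin commands submitted versus ones the example could not submit, and the last line splits submitted aborts into success (the target reported the command aborted), unsuccessful (the command completed before the abort landed), and failed. A rough post-processing sketch, assuming a pass was captured to a file (abort_qd4.log is hypothetical):

  # hypothetical parse of the summary counters from a saved abort run
  awk '{ gsub(/,/, "") }
       /I\/O completed:/ { io = $(NF - 2); io_failed = $NF }
       /abort submitted/ { submitted = $(NF - 4); not_submitted = $NF }
       { for (i = 1; i < NF; i++)
           if ($i == "success") { aborted = $(i + 1); raced = $(i + 3) } }
       END { printf "io=%d (failed %d) aborts=%d: %d aborted, %d raced\n",
             io, io_failed, submitted, aborted, raced }' abort_qd4.log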
00:39:15.848 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12133, failed: 5 00:39:15.848 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3357, failed to submit 8781 00:39:15.848 success 705, unsuccessful 2652, failed 0 00:39:15.848 08:54:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:15.848 08:54:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:15.848 [2024-10-01 08:54:07.355137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:472 len:8 PRP1 0x200007c40000 PRP2 0x0 00:39:15.848 [2024-10-01 08:54:07.355179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:39:15.848 [2024-10-01 08:54:07.402100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:1640 len:8 PRP1 0x200007c50000 PRP2 0x0 00:39:15.848 [2024-10-01 08:54:07.402126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00d0 p:1 m:0 dnr:0 00:39:15.848 [2024-10-01 08:54:07.426044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2112 len:8 PRP1 0x200007c52000 PRP2 0x0 00:39:15.848 [2024-10-01 08:54:07.426070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:39:17.230 [2024-10-01 08:54:08.691344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:31216 len:8 PRP1 0x200007c46000 PRP2 0x0 00:39:17.230 [2024-10-01 08:54:08.691386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.142 Initializing NVMe Controllers 00:39:19.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:19.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:19.142 Initialization complete. Launching workers. 
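Each pair of nvme_qpair notices above first prints the I/O itself (submission queue id, command id, namespace, LBA, length) and then its completion; the "(00/07)" in the completion is status code type 0x0, generic command status, with status code 0x07, Command Abort Requested, which is how a successfully aborted READ completes. A small lookup helper for those "(SCT/SC)" pairs (the function is illustrative, not part of the test scripts):

  # illustrative decoder for the "(SCT/SC)" pairs in the completions above
  decode_nvme_status() {
    case "$1" in
      00/00) echo "generic: Successful Completion" ;;
      00/04) echo "generic: Data Transfer Error" ;;
      00/07) echo "generic: Command Abort Requested" ;;
      *)     echo "see the status code tables in the NVMe base spec" ;;
    esac
  }
  decode_nvme_status 00/07    # -> generic: Command Abort Requested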
00:39:19.142 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8568, failed: 4 00:39:19.142 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1244, failed to submit 7328 00:39:19.142 success 317, unsuccessful 927, failed 0 00:39:19.142 08:54:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:19.142 08:54:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:19.142 [2024-10-01 08:54:10.736591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:151 nsid:1 lba:1760 len:8 PRP1 0x200007916000 PRP2 0x0 00:39:19.142 [2024-10-01 08:54:10.736624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:151 cdw0:0 sqhd:004e p:1 m:0 dnr:0 00:39:19.713 [2024-10-01 08:54:11.443319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:156 nsid:1 lba:81272 len:8 PRP1 0x2000078cc000 PRP2 0x0 00:39:19.713 [2024-10-01 08:54:11.443346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:156 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:39:20.283 [2024-10-01 08:54:11.852754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:186 nsid:1 lba:127600 len:8 PRP1 0x200007916000 PRP2 0x0 00:39:20.283 [2024-10-01 08:54:11.852777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:186 cdw0:0 sqhd:00ba p:0 m:0 dnr:0 00:39:20.543 [2024-10-01 08:54:12.151048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:168 nsid:1 lba:161328 len:8 PRP1 0x2000078d6000 PRP2 0x0 00:39:20.543 [2024-10-01 08:54:12.151070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:168 cdw0:0 sqhd:0033 p:1 m:0 dnr:0 00:39:21.927 Initializing NVMe Controllers 00:39:21.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:21.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:21.927 Initialization complete. Launching workers. 
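rabort sweeps queue depths 4, 24 and 64 (qds=(4 24 64) in the trace), presumably to vary how many commands are in flight when each Abort is issued and so how often an abort races a normal completion. Back-of-the-envelope hit rates from the two passes above:

  # abort hit rate = success / submitted, from the summaries printed above
  rate() { echo "scale=1; 100 * $1 / $2" | bc; }
  rate 705 3357    # qd=4  pass: ~21% of submitted aborts landed
  rate 317 1244    # qd=24 pass: ~25%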
00:39:21.927 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42508, failed: 4 00:39:21.927 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2833, failed to submit 39679 00:39:21.927 success 598, unsuccessful 2235, failed 0 00:39:22.187 08:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:22.187 08:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.187 08:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:22.187 08:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.187 08:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:22.187 08:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.187 08:54:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4074709 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 4074709 ']' 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 4074709 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4074709 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4074709' 00:39:24.100 killing process with pid 4074709 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 4074709 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 4074709 00:39:24.100 00:39:24.100 real 0m12.183s 00:39:24.100 user 0m49.701s 00:39:24.100 sys 0m1.835s 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:24.100 08:54:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:24.100 ************************************ 00:39:24.100 END TEST spdk_target_abort 00:39:24.100 ************************************ 00:39:24.100 08:54:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:24.100 08:54:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:24.100 08:54:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:24.100 08:54:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:24.100 ************************************ 00:39:24.100 START TEST kernel_target_abort 00:39:24.100 
************************************ 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:24.101 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:39:24.361 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:24.361 08:54:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:27.665 Waiting for block devices as requested 00:39:27.665 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:27.665 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:27.665 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:27.665 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:27.665 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:27.934 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:27.934 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:27.934 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:28.195 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:28.195 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:28.467 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:28.467 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:28.467 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:28.467 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:28.765 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:28.765 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:28.765 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:29.055 No valid GPT data, bailing 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:29.055 08:54:20 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:39:29.055 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:29.314 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:39:29.314 00:39:29.314 Discovery Log Number of Records 2, Generation counter 2 00:39:29.314 =====Discovery Log Entry 0====== 00:39:29.315 trtype: tcp 00:39:29.315 adrfam: ipv4 00:39:29.315 subtype: current discovery subsystem 00:39:29.315 treq: not specified, sq flow control disable supported 00:39:29.315 portid: 1 00:39:29.315 trsvcid: 4420 00:39:29.315 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:29.315 traddr: 10.0.0.1 00:39:29.315 eflags: none 00:39:29.315 sectype: none 00:39:29.315 =====Discovery Log Entry 1====== 00:39:29.315 trtype: tcp 00:39:29.315 adrfam: ipv4 00:39:29.315 subtype: nvme subsystem 00:39:29.315 treq: not specified, sq flow control disable supported 00:39:29.315 portid: 1 00:39:29.315 trsvcid: 4420 00:39:29.315 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:29.315 traddr: 10.0.0.1 00:39:29.315 eflags: none 00:39:29.315 sectype: none 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:29.315 08:54:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:29.315 08:54:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:32.610 Initializing NVMe Controllers 00:39:32.610 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:32.610 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:32.610 Initialization complete. Launching workers. 00:39:32.610 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67428, failed: 0 00:39:32.610 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67428, failed to submit 0 00:39:32.610 success 0, unsuccessful 67428, failed 0 00:39:32.610 08:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:32.610 08:54:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:35.907 Initializing NVMe Controllers 00:39:35.908 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:35.908 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:35.908 Initialization complete. Launching workers. 
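The kernel_target_abort half drives the in-kernel nvmet target instead of nvmf_tgt, configured above purely through configfs: mkdir the subsystem, namespace and port, echo the attributes, then symlink the subsystem into the port. The xtrace only shows the echoed values, so the attribute file names in the sketch below are the standard nvmet ones rather than anything visible in this log. Note also the shape of the results: the Linux target accepts Abort commands but completes them with the "command not aborted" bit set, so every abort here lands in the unsuccessful column (success 0, unsuccessful 67428 above).

  # condensed replay of the configfs setup traced above; paths from this run,
  # attribute names are the usual nvmet ones (not shown by the xtrace)
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"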
00:39:35.908 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108751, failed: 0 00:39:35.908 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27366, failed to submit 81385 00:39:35.908 success 0, unsuccessful 27366, failed 0 00:39:35.908 08:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:35.908 08:54:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:38.448 Initializing NVMe Controllers 00:39:38.448 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:38.448 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:38.448 Initialization complete. Launching workers. 00:39:38.448 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102395, failed: 0 00:39:38.448 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25602, failed to submit 76793 00:39:38.448 success 0, unsuccessful 25602, failed 0 00:39:38.448 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:38.448 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:38.448 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:39:38.449 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:38.449 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:38.449 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:38.449 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:38.449 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:39:38.449 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:39:38.709 08:54:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:42.009 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:42.009 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:39:42.009 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:44.020 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:44.280 00:39:44.280 real 0m19.987s 00:39:44.280 user 0m9.864s 00:39:44.280 sys 0m5.884s 00:39:44.280 08:54:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:44.280 08:54:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.280 ************************************ 00:39:44.280 END TEST kernel_target_abort 00:39:44.280 ************************************ 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:44.280 rmmod nvme_tcp 00:39:44.280 rmmod nvme_fabrics 00:39:44.280 rmmod nvme_keyring 00:39:44.280 08:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 4074709 ']' 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 4074709 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 4074709 ']' 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 4074709 00:39:44.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4074709) - No such process 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 4074709 is not found' 00:39:44.280 Process with pid 4074709 is not found 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:39:44.280 08:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:47.584 Waiting for block devices as requested 00:39:47.584 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:47.584 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:47.584 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:47.584 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:47.584 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:47.584 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:47.584 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:47.584 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:47.844 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:47.844 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:48.104 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:48.104 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:48.104 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:48.104 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:48.364 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:48.364 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:48.364 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:48.624 08:54:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.167 08:54:42 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:51.167 00:39:51.167 real 0m51.047s 00:39:51.167 user 1m4.478s 00:39:51.167 sys 0m18.221s 00:39:51.167 08:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:51.167 08:54:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:51.167 ************************************ 00:39:51.167 END TEST nvmf_abort_qd_sizes 00:39:51.167 ************************************ 00:39:51.167 08:54:42 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:51.167 08:54:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:51.167 08:54:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:51.167 08:54:42 -- common/autotest_common.sh@10 -- # set +x 00:39:51.167 ************************************ 00:39:51.167 START TEST keyring_file 00:39:51.167 ************************************ 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:51.167 * Looking for test storage... 
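The nvmftestfini trace above unwinds the rig: the host-side NVMe/TCP modules come out (the rmmod lines), and the firewall is restored by replaying iptables-save output with every rule carrying the SPDK_NVMF comment filtered away, which is why the iptables insert earlier in the run was tagged with "-m comment". Condensed, with one assumption flagged:

  # condensed teardown mirroring nvmftestfini above
  modprobe -v -r nvme-tcp nvme-fabrics                   # triggers the rmmod lines above
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged test rules
  ip netns delete cvl_0_0_ns_spdk   # assumption: roughly what _remove_spdk_ns does here
  ip -4 addr flush cvl_0_1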
00:39:51.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:51.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.167 --rc genhtml_branch_coverage=1 00:39:51.167 --rc genhtml_function_coverage=1 00:39:51.167 --rc genhtml_legend=1 00:39:51.167 --rc geninfo_all_blocks=1 00:39:51.167 --rc geninfo_unexecuted_blocks=1 00:39:51.167 00:39:51.167 ' 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:51.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.167 --rc genhtml_branch_coverage=1 00:39:51.167 --rc genhtml_function_coverage=1 00:39:51.167 --rc genhtml_legend=1 00:39:51.167 --rc geninfo_all_blocks=1 
00:39:51.167 --rc geninfo_unexecuted_blocks=1 00:39:51.167 00:39:51.167 ' 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:51.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.167 --rc genhtml_branch_coverage=1 00:39:51.167 --rc genhtml_function_coverage=1 00:39:51.167 --rc genhtml_legend=1 00:39:51.167 --rc geninfo_all_blocks=1 00:39:51.167 --rc geninfo_unexecuted_blocks=1 00:39:51.167 00:39:51.167 ' 00:39:51.167 08:54:42 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:51.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.167 --rc genhtml_branch_coverage=1 00:39:51.167 --rc genhtml_function_coverage=1 00:39:51.167 --rc genhtml_legend=1 00:39:51.167 --rc geninfo_all_blocks=1 00:39:51.167 --rc geninfo_unexecuted_blocks=1 00:39:51.167 00:39:51.167 ' 00:39:51.167 08:54:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:51.167 08:54:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:51.167 08:54:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:51.167 08:54:42 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.167 08:54:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.167 08:54:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.167 08:54:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:51.167 08:54:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:51.167 08:54:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:51.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
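prep_key (keyring/common.sh) turns the raw hex key from file.sh@15 into an NVMe TLS interchange PSK through the "python -" one-liner traced just below. A standalone sketch of what format_interchange_psk appears to compute, assuming the TP-8018 interchange layout (base64 of the key bytes followed by their little-endian CRC32, with digest 0 selecting the "no hash" indicator) -- the resulting file must be 0600, or the keyring will reject it later in this test:

# Hypothetical reconstruction of prep_key for key0; values mirror this run.
key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 - "$key_hex" > "$path" <<'PY'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)  # 4-byte CRC32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PY
chmod 0600 "$path"   # anything looser fails the permissions check later on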
00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JUODXtOctR 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@729 -- # python - 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JUODXtOctR 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JUODXtOctR 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.JUODXtOctR 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Al4e0UilgL 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:39:51.168 08:54:42 keyring_file -- nvmf/common.sh@729 -- # python - 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Al4e0UilgL 00:39:51.168 08:54:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Al4e0UilgL 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Al4e0UilgL 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=4085096 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4085096 00:39:51.168 08:54:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:51.168 08:54:42 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 4085096 ']' 00:39:51.168 08:54:42 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.168 08:54:42 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:51.168 08:54:42 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:51.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:51.168 08:54:42 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:51.168 08:54:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:51.168 [2024-10-01 08:54:42.978477] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:39:51.168 [2024-10-01 08:54:42.978558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085096 ] 00:39:51.427 [2024-10-01 08:54:43.042184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.427 [2024-10-01 08:54:43.116826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.998 08:54:43 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:51.998 08:54:43 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:51.998 08:54:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:51.998 08:54:43 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.998 08:54:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:51.998 [2024-10-01 08:54:43.765829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:51.998 null0 00:39:51.998 [2024-10-01 08:54:43.797881] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:51.998 [2024-10-01 08:54:43.798214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:51.998 08:54:43 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.998 08:54:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:51.998 08:54:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:52.258 [2024-10-01 08:54:43.829946] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:52.258 request: 00:39:52.258 { 00:39:52.258 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:52.258 "secure_channel": false, 00:39:52.258 "listen_address": { 00:39:52.258 "trtype": "tcp", 00:39:52.258 "traddr": "127.0.0.1", 00:39:52.258 "trsvcid": "4420" 00:39:52.258 }, 00:39:52.258 "method": "nvmf_subsystem_add_listener", 00:39:52.258 "req_id": 1 00:39:52.258 } 00:39:52.258 Got JSON-RPC error response 00:39:52.258 response: 00:39:52.258 { 00:39:52.258 
"code": -32602, 00:39:52.258 "message": "Invalid parameters" 00:39:52.258 } 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:52.258 08:54:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=4085405 00:39:52.258 08:54:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 4085405 /var/tmp/bperf.sock 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 4085405 ']' 00:39:52.258 08:54:43 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:52.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:52.258 08:54:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:52.258 [2024-10-01 08:54:43.886421] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:39:52.258 [2024-10-01 08:54:43.886470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085405 ] 00:39:52.258 [2024-10-01 08:54:43.963185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.258 [2024-10-01 08:54:44.026614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:53.222 08:54:44 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:53.222 08:54:44 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:53.222 08:54:44 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JUODXtOctR 00:39:53.222 08:54:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JUODXtOctR 00:39:53.222 08:54:44 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Al4e0UilgL 00:39:53.222 08:54:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Al4e0UilgL 00:39:53.222 08:54:45 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:53.222 08:54:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:53.222 08:54:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:53.222 08:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.222 08:54:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:39:53.481 08:54:45 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JUODXtOctR == \/\t\m\p\/\t\m\p\.\J\U\O\D\X\t\O\c\t\R ]] 00:39:53.481 08:54:45 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:53.481 08:54:45 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:53.481 08:54:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:53.481 08:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.481 08:54:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:53.742 08:54:45 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Al4e0UilgL == \/\t\m\p\/\t\m\p\.\A\l\4\e\0\U\i\l\g\L ]] 00:39:53.742 08:54:45 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:53.742 08:54:45 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:53.742 08:54:45 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:53.742 08:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.002 08:54:45 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:54.002 08:54:45 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.002 08:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.262 [2024-10-01 08:54:45.868683] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:54.262 nvme0n1 00:39:54.262 08:54:45 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:54.262 08:54:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:54.262 08:54:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:54.262 08:54:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.262 08:54:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:54.263 08:54:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.522 08:54:46 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:54.522 08:54:46 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:54.522 08:54:46 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:39:54.522 08:54:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:54.522 08:54:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.522 08:54:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:54.522 08:54:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.522 08:54:46 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:54.522 08:54:46 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:54.783 Running I/O for 1 seconds... 00:39:55.724 16600.00 IOPS, 64.84 MiB/s 00:39:55.724 Latency(us) 00:39:55.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:55.724 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:55.724 nvme0n1 : 1.01 16610.09 64.88 0.00 0.00 7676.32 5789.01 19114.67 00:39:55.724 =================================================================================================================== 00:39:55.724 Total : 16610.09 64.88 0.00 0.00 7676.32 5789.01 19114.67 00:39:55.724 { 00:39:55.724 "results": [ 00:39:55.724 { 00:39:55.724 "job": "nvme0n1", 00:39:55.724 "core_mask": "0x2", 00:39:55.724 "workload": "randrw", 00:39:55.724 "percentage": 50, 00:39:55.724 "status": "finished", 00:39:55.724 "queue_depth": 128, 00:39:55.724 "io_size": 4096, 00:39:55.724 "runtime": 1.007219, 00:39:55.724 "iops": 16610.091747673545, 00:39:55.724 "mibps": 64.88317088934978, 00:39:55.724 "io_failed": 0, 00:39:55.724 "io_timeout": 0, 00:39:55.724 "avg_latency_us": 7676.322658696951, 00:39:55.724 "min_latency_us": 5789.013333333333, 00:39:55.724 "max_latency_us": 19114.666666666668 00:39:55.724 } 00:39:55.724 ], 00:39:55.724 "core_count": 1 00:39:55.724 } 00:39:55.724 08:54:47 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:55.724 08:54:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:55.985 08:54:47 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.985 08:54:47 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:55.985 08:54:47 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.985 08:54:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:56.246 
08:54:47 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:56.246 08:54:47 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:56.246 08:54:47 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:56.246 08:54:47 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:56.246 08:54:47 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:56.246 08:54:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:56.246 08:54:47 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:56.246 08:54:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:56.246 08:54:47 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:56.246 08:54:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:56.506 [2024-10-01 08:54:48.120481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:56.506 [2024-10-01 08:54:48.121295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf0e60 (107): Transport endpoint is not connected 00:39:56.506 [2024-10-01 08:54:48.122292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf0e60 (9): Bad file descriptor 00:39:56.506 [2024-10-01 08:54:48.123293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:56.506 [2024-10-01 08:54:48.123301] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:56.506 [2024-10-01 08:54:48.123307] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:56.506 [2024-10-01 08:54:48.123313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
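The attach with --psk key1 just above is meant to fail: the target was brought up against key0's PSK, so the TLS handshake collapses and every read reports errno 107 (Transport endpoint is not connected). The NOT wrapper from autotest_common.sh converts that expected failure into a pass, and the same pattern guards the bad-permissions and deleted-key-file cases further down. A simplified sketch of the idea (the real helper also records the exit status in es):

NOT() {
    # Succeed only if the wrapped command fails.
    if "$@"; then
        return 1
    fi
    return 0
}
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
NOT $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1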
00:39:56.506 request: 00:39:56.506 { 00:39:56.506 "name": "nvme0", 00:39:56.506 "trtype": "tcp", 00:39:56.506 "traddr": "127.0.0.1", 00:39:56.506 "adrfam": "ipv4", 00:39:56.506 "trsvcid": "4420", 00:39:56.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:56.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:56.506 "prchk_reftag": false, 00:39:56.506 "prchk_guard": false, 00:39:56.506 "hdgst": false, 00:39:56.506 "ddgst": false, 00:39:56.506 "psk": "key1", 00:39:56.506 "allow_unrecognized_csi": false, 00:39:56.506 "method": "bdev_nvme_attach_controller", 00:39:56.506 "req_id": 1 00:39:56.506 } 00:39:56.506 Got JSON-RPC error response 00:39:56.506 response: 00:39:56.506 { 00:39:56.506 "code": -5, 00:39:56.506 "message": "Input/output error" 00:39:56.506 } 00:39:56.506 08:54:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:56.506 08:54:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:56.506 08:54:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:56.506 08:54:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:56.506 08:54:48 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.506 08:54:48 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:56.506 08:54:48 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.506 08:54:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:56.767 08:54:48 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:56.767 08:54:48 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:56.767 08:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:57.028 08:54:48 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:57.028 08:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:57.028 08:54:48 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:57.028 08:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.028 08:54:48 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:57.288 08:54:48 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:57.288 08:54:48 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.JUODXtOctR 00:39:57.288 08:54:48 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.JUODXtOctR 00:39:57.288 08:54:48 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:57.288 08:54:48 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.JUODXtOctR 00:39:57.288 08:54:48 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:57.288 08:54:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:57.288 08:54:48 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:57.288 08:54:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:57.288 08:54:48 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JUODXtOctR 00:39:57.288 08:54:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JUODXtOctR 00:39:57.549 [2024-10-01 08:54:49.141100] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JUODXtOctR': 0100660 00:39:57.549 [2024-10-01 08:54:49.141118] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:57.549 request: 00:39:57.549 { 00:39:57.549 "name": "key0", 00:39:57.549 "path": "/tmp/tmp.JUODXtOctR", 00:39:57.549 "method": "keyring_file_add_key", 00:39:57.549 "req_id": 1 00:39:57.549 } 00:39:57.549 Got JSON-RPC error response 00:39:57.549 response: 00:39:57.549 { 00:39:57.549 "code": -1, 00:39:57.549 "message": "Operation not permitted" 00:39:57.549 } 00:39:57.549 08:54:49 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:57.549 08:54:49 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:57.549 08:54:49 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:57.549 08:54:49 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:57.549 08:54:49 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.JUODXtOctR 00:39:57.549 08:54:49 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JUODXtOctR 00:39:57.549 08:54:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JUODXtOctR 00:39:57.549 08:54:49 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.JUODXtOctR 00:39:57.549 08:54:49 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:57.549 08:54:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:57.549 08:54:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:57.549 08:54:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:57.549 08:54:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:57.549 08:54:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.810 08:54:49 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:57.810 08:54:49 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.810 08:54:49 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:57.810 08:54:49 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.810 08:54:49 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:57.810 08:54:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:57.810 08:54:49 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:57.810 08:54:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:57.810 08:54:49 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:57.810 08:54:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:58.071 [2024-10-01 08:54:49.662420] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.JUODXtOctR': No such file or directory 00:39:58.071 [2024-10-01 08:54:49.662433] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:58.071 [2024-10-01 08:54:49.662446] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:58.071 [2024-10-01 08:54:49.662451] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:58.071 [2024-10-01 08:54:49.662457] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:58.071 [2024-10-01 08:54:49.662462] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:58.071 request: 00:39:58.071 { 00:39:58.071 "name": "nvme0", 00:39:58.071 "trtype": "tcp", 00:39:58.071 "traddr": "127.0.0.1", 00:39:58.071 "adrfam": "ipv4", 00:39:58.071 "trsvcid": "4420", 00:39:58.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:58.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:58.071 "prchk_reftag": false, 00:39:58.071 "prchk_guard": false, 00:39:58.071 "hdgst": false, 00:39:58.071 "ddgst": false, 00:39:58.071 "psk": "key0", 00:39:58.072 "allow_unrecognized_csi": false, 00:39:58.072 "method": "bdev_nvme_attach_controller", 00:39:58.072 "req_id": 1 00:39:58.072 } 00:39:58.072 Got JSON-RPC error response 00:39:58.072 response: 00:39:58.072 { 00:39:58.072 "code": -19, 00:39:58.072 "message": "No such device" 00:39:58.072 } 00:39:58.072 08:54:49 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:58.072 08:54:49 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:58.072 08:54:49 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:58.072 08:54:49 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:58.072 08:54:49 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:58.072 08:54:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:58.072 08:54:49 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:58.072 08:54:49 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:58.072 08:54:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:58.072 08:54:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:58.072 08:54:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:58.072 08:54:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:58.072 08:54:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NdjO3JSm5v 00:39:58.072 08:54:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:58.072 08:54:49 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:58.072 08:54:49 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:39:58.072 08:54:49 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:39:58.072 08:54:49 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:39:58.072 08:54:49 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:39:58.072 08:54:49 keyring_file -- nvmf/common.sh@729 -- # python - 00:39:58.332 08:54:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NdjO3JSm5v 00:39:58.332 08:54:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NdjO3JSm5v 00:39:58.332 08:54:49 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.NdjO3JSm5v 00:39:58.332 08:54:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NdjO3JSm5v 00:39:58.332 08:54:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NdjO3JSm5v 00:39:58.332 08:54:50 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:58.333 08:54:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:58.593 nvme0n1 00:39:58.593 08:54:50 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:58.593 08:54:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:58.593 08:54:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:58.593 08:54:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:58.593 08:54:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:58.593 08:54:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:58.854 08:54:50 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:58.854 08:54:50 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:58.854 08:54:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:58.854 08:54:50 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:58.854 08:54:50 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:58.854 08:54:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:58.854 08:54:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:39:58.854 08:54:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:59.115 08:54:50 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:59.115 08:54:50 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:59.115 08:54:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:59.115 08:54:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:59.115 08:54:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:59.115 08:54:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:59.115 08:54:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:59.376 08:54:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:59.376 08:54:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:59.376 08:54:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:59.376 08:54:51 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:59.376 08:54:51 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:59.376 08:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:59.636 08:54:51 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:59.636 08:54:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NdjO3JSm5v 00:39:59.636 08:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NdjO3JSm5v 00:39:59.896 08:54:51 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Al4e0UilgL 00:39:59.896 08:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Al4e0UilgL 00:39:59.896 08:54:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:59.896 08:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:00.156 nvme0n1 00:40:00.156 08:54:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:00.156 08:54:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:00.417 08:54:52 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:00.417 "subsystems": [ 00:40:00.417 { 00:40:00.417 "subsystem": "keyring", 00:40:00.417 "config": [ 00:40:00.417 { 00:40:00.417 "method": "keyring_file_add_key", 00:40:00.417 "params": { 00:40:00.417 "name": "key0", 00:40:00.417 "path": "/tmp/tmp.NdjO3JSm5v" 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "keyring_file_add_key", 00:40:00.417 "params": { 00:40:00.417 "name": "key1", 00:40:00.417 "path": "/tmp/tmp.Al4e0UilgL" 00:40:00.417 } 00:40:00.417 } 00:40:00.417 ] 
00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "subsystem": "iobuf", 00:40:00.417 "config": [ 00:40:00.417 { 00:40:00.417 "method": "iobuf_set_options", 00:40:00.417 "params": { 00:40:00.417 "small_pool_count": 8192, 00:40:00.417 "large_pool_count": 1024, 00:40:00.417 "small_bufsize": 8192, 00:40:00.417 "large_bufsize": 135168 00:40:00.417 } 00:40:00.417 } 00:40:00.417 ] 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "subsystem": "sock", 00:40:00.417 "config": [ 00:40:00.417 { 00:40:00.417 "method": "sock_set_default_impl", 00:40:00.417 "params": { 00:40:00.417 "impl_name": "posix" 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "sock_impl_set_options", 00:40:00.417 "params": { 00:40:00.417 "impl_name": "ssl", 00:40:00.417 "recv_buf_size": 4096, 00:40:00.417 "send_buf_size": 4096, 00:40:00.417 "enable_recv_pipe": true, 00:40:00.417 "enable_quickack": false, 00:40:00.417 "enable_placement_id": 0, 00:40:00.417 "enable_zerocopy_send_server": true, 00:40:00.417 "enable_zerocopy_send_client": false, 00:40:00.417 "zerocopy_threshold": 0, 00:40:00.417 "tls_version": 0, 00:40:00.417 "enable_ktls": false 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "sock_impl_set_options", 00:40:00.417 "params": { 00:40:00.417 "impl_name": "posix", 00:40:00.417 "recv_buf_size": 2097152, 00:40:00.417 "send_buf_size": 2097152, 00:40:00.417 "enable_recv_pipe": true, 00:40:00.417 "enable_quickack": false, 00:40:00.417 "enable_placement_id": 0, 00:40:00.417 "enable_zerocopy_send_server": true, 00:40:00.417 "enable_zerocopy_send_client": false, 00:40:00.417 "zerocopy_threshold": 0, 00:40:00.417 "tls_version": 0, 00:40:00.417 "enable_ktls": false 00:40:00.417 } 00:40:00.417 } 00:40:00.417 ] 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "subsystem": "vmd", 00:40:00.417 "config": [] 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "subsystem": "accel", 00:40:00.417 "config": [ 00:40:00.417 { 00:40:00.417 "method": "accel_set_options", 00:40:00.417 "params": { 00:40:00.417 "small_cache_size": 128, 00:40:00.417 "large_cache_size": 16, 00:40:00.417 "task_count": 2048, 00:40:00.417 "sequence_count": 2048, 00:40:00.417 "buf_count": 2048 00:40:00.417 } 00:40:00.417 } 00:40:00.417 ] 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "subsystem": "bdev", 00:40:00.417 "config": [ 00:40:00.417 { 00:40:00.417 "method": "bdev_set_options", 00:40:00.417 "params": { 00:40:00.417 "bdev_io_pool_size": 65535, 00:40:00.417 "bdev_io_cache_size": 256, 00:40:00.417 "bdev_auto_examine": true, 00:40:00.417 "iobuf_small_cache_size": 128, 00:40:00.417 "iobuf_large_cache_size": 16 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "bdev_raid_set_options", 00:40:00.417 "params": { 00:40:00.417 "process_window_size_kb": 1024, 00:40:00.417 "process_max_bandwidth_mb_sec": 0 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "bdev_iscsi_set_options", 00:40:00.417 "params": { 00:40:00.417 "timeout_sec": 30 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "bdev_nvme_set_options", 00:40:00.417 "params": { 00:40:00.417 "action_on_timeout": "none", 00:40:00.417 "timeout_us": 0, 00:40:00.417 "timeout_admin_us": 0, 00:40:00.417 "keep_alive_timeout_ms": 10000, 00:40:00.417 "arbitration_burst": 0, 00:40:00.417 "low_priority_weight": 0, 00:40:00.417 "medium_priority_weight": 0, 00:40:00.417 "high_priority_weight": 0, 00:40:00.417 "nvme_adminq_poll_period_us": 10000, 00:40:00.417 "nvme_ioq_poll_period_us": 0, 00:40:00.417 "io_queue_requests": 512, 00:40:00.417 "delay_cmd_submit": true, 
00:40:00.417 "transport_retry_count": 4, 00:40:00.417 "bdev_retry_count": 3, 00:40:00.417 "transport_ack_timeout": 0, 00:40:00.417 "ctrlr_loss_timeout_sec": 0, 00:40:00.417 "reconnect_delay_sec": 0, 00:40:00.417 "fast_io_fail_timeout_sec": 0, 00:40:00.417 "disable_auto_failback": false, 00:40:00.417 "generate_uuids": false, 00:40:00.417 "transport_tos": 0, 00:40:00.417 "nvme_error_stat": false, 00:40:00.417 "rdma_srq_size": 0, 00:40:00.417 "io_path_stat": false, 00:40:00.417 "allow_accel_sequence": false, 00:40:00.417 "rdma_max_cq_size": 0, 00:40:00.417 "rdma_cm_event_timeout_ms": 0, 00:40:00.417 "dhchap_digests": [ 00:40:00.417 "sha256", 00:40:00.417 "sha384", 00:40:00.417 "sha512" 00:40:00.417 ], 00:40:00.417 "dhchap_dhgroups": [ 00:40:00.417 "null", 00:40:00.417 "ffdhe2048", 00:40:00.417 "ffdhe3072", 00:40:00.417 "ffdhe4096", 00:40:00.417 "ffdhe6144", 00:40:00.417 "ffdhe8192" 00:40:00.417 ] 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "bdev_nvme_attach_controller", 00:40:00.417 "params": { 00:40:00.417 "name": "nvme0", 00:40:00.417 "trtype": "TCP", 00:40:00.417 "adrfam": "IPv4", 00:40:00.417 "traddr": "127.0.0.1", 00:40:00.417 "trsvcid": "4420", 00:40:00.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:00.417 "prchk_reftag": false, 00:40:00.417 "prchk_guard": false, 00:40:00.417 "ctrlr_loss_timeout_sec": 0, 00:40:00.417 "reconnect_delay_sec": 0, 00:40:00.417 "fast_io_fail_timeout_sec": 0, 00:40:00.417 "psk": "key0", 00:40:00.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:00.417 "hdgst": false, 00:40:00.417 "ddgst": false 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "bdev_nvme_set_hotplug", 00:40:00.417 "params": { 00:40:00.417 "period_us": 100000, 00:40:00.417 "enable": false 00:40:00.417 } 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "method": "bdev_wait_for_examine" 00:40:00.417 } 00:40:00.417 ] 00:40:00.417 }, 00:40:00.417 { 00:40:00.417 "subsystem": "nbd", 00:40:00.417 "config": [] 00:40:00.417 } 00:40:00.417 ] 00:40:00.417 }' 00:40:00.417 08:54:52 keyring_file -- keyring/file.sh@115 -- # killprocess 4085405 00:40:00.417 08:54:52 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 4085405 ']' 00:40:00.417 08:54:52 keyring_file -- common/autotest_common.sh@954 -- # kill -0 4085405 00:40:00.417 08:54:52 keyring_file -- common/autotest_common.sh@955 -- # uname 00:40:00.417 08:54:52 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:00.418 08:54:52 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4085405 00:40:00.418 08:54:52 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:00.418 08:54:52 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:00.418 08:54:52 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4085405' 00:40:00.418 killing process with pid 4085405 00:40:00.418 08:54:52 keyring_file -- common/autotest_common.sh@969 -- # kill 4085405 00:40:00.418 Received shutdown signal, test time was about 1.000000 seconds 00:40:00.418 00:40:00.418 Latency(us) 00:40:00.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:00.418 =================================================================================================================== 00:40:00.418 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:00.418 08:54:52 keyring_file -- common/autotest_common.sh@974 -- # wait 4085405 00:40:00.678 08:54:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=4087059 
00:40:00.678 08:54:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 4087059 /var/tmp/bperf.sock 00:40:00.678 08:54:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:00.678 "subsystems": [ 00:40:00.678 { 00:40:00.678 "subsystem": "keyring", 00:40:00.678 "config": [ 00:40:00.678 { 00:40:00.678 "method": "keyring_file_add_key", 00:40:00.678 "params": { 00:40:00.678 "name": "key0", 00:40:00.678 "path": "/tmp/tmp.NdjO3JSm5v" 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": "keyring_file_add_key", 00:40:00.678 "params": { 00:40:00.678 "name": "key1", 00:40:00.678 "path": "/tmp/tmp.Al4e0UilgL" 00:40:00.678 } 00:40:00.678 } 00:40:00.678 ] 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "subsystem": "iobuf", 00:40:00.678 "config": [ 00:40:00.678 { 00:40:00.678 "method": "iobuf_set_options", 00:40:00.678 "params": { 00:40:00.678 "small_pool_count": 8192, 00:40:00.678 "large_pool_count": 1024, 00:40:00.678 "small_bufsize": 8192, 00:40:00.678 "large_bufsize": 135168 00:40:00.678 } 00:40:00.678 } 00:40:00.678 ] 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "subsystem": "sock", 00:40:00.678 "config": [ 00:40:00.678 { 00:40:00.678 "method": "sock_set_default_impl", 00:40:00.678 "params": { 00:40:00.678 "impl_name": "posix" 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": "sock_impl_set_options", 00:40:00.678 "params": { 00:40:00.678 "impl_name": "ssl", 00:40:00.678 "recv_buf_size": 4096, 00:40:00.678 "send_buf_size": 4096, 00:40:00.678 "enable_recv_pipe": true, 00:40:00.678 "enable_quickack": false, 00:40:00.678 "enable_placement_id": 0, 00:40:00.678 "enable_zerocopy_send_server": true, 00:40:00.678 "enable_zerocopy_send_client": false, 00:40:00.678 "zerocopy_threshold": 0, 00:40:00.678 "tls_version": 0, 00:40:00.678 "enable_ktls": false 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": "sock_impl_set_options", 00:40:00.678 "params": { 00:40:00.678 "impl_name": "posix", 00:40:00.678 "recv_buf_size": 2097152, 00:40:00.678 "send_buf_size": 2097152, 00:40:00.678 "enable_recv_pipe": true, 00:40:00.678 "enable_quickack": false, 00:40:00.678 "enable_placement_id": 0, 00:40:00.678 "enable_zerocopy_send_server": true, 00:40:00.678 "enable_zerocopy_send_client": false, 00:40:00.678 "zerocopy_threshold": 0, 00:40:00.678 "tls_version": 0, 00:40:00.678 "enable_ktls": false 00:40:00.678 } 00:40:00.678 } 00:40:00.678 ] 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "subsystem": "vmd", 00:40:00.678 "config": [] 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "subsystem": "accel", 00:40:00.678 "config": [ 00:40:00.678 { 00:40:00.678 "method": "accel_set_options", 00:40:00.678 "params": { 00:40:00.678 "small_cache_size": 128, 00:40:00.678 "large_cache_size": 16, 00:40:00.678 "task_count": 2048, 00:40:00.678 "sequence_count": 2048, 00:40:00.678 "buf_count": 2048 00:40:00.678 } 00:40:00.678 } 00:40:00.678 ] 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "subsystem": "bdev", 00:40:00.678 "config": [ 00:40:00.678 { 00:40:00.678 "method": "bdev_set_options", 00:40:00.678 "params": { 00:40:00.678 "bdev_io_pool_size": 65535, 00:40:00.678 "bdev_io_cache_size": 256, 00:40:00.678 "bdev_auto_examine": true, 00:40:00.678 "iobuf_small_cache_size": 128, 00:40:00.678 "iobuf_large_cache_size": 16 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": "bdev_raid_set_options", 00:40:00.678 "params": { 00:40:00.678 "process_window_size_kb": 1024, 00:40:00.678 "process_max_bandwidth_mb_sec": 0 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": 
"bdev_iscsi_set_options", 00:40:00.678 "params": { 00:40:00.678 "timeout_sec": 30 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": "bdev_nvme_set_options", 00:40:00.678 "params": { 00:40:00.678 "action_on_timeout": "none", 00:40:00.678 "timeout_us": 0, 00:40:00.678 "timeout_admin_us": 0, 00:40:00.678 "keep_alive_timeout_ms": 10000, 00:40:00.678 "arbitration_burst": 0, 00:40:00.678 "low_priority_weight": 0, 00:40:00.678 "medium_priority_weight": 0, 00:40:00.678 "high_priority_weight": 0, 00:40:00.678 "nvme_adminq_poll_period_us": 10000, 00:40:00.678 "nvme_ioq_poll_period_us": 0, 00:40:00.678 "io_queue_requests": 512, 00:40:00.678 "delay_cmd_submit": true, 00:40:00.678 "transport_retry_count": 4, 00:40:00.678 "bdev_retry_count": 3, 00:40:00.678 "transport_ack_timeout": 0, 00:40:00.678 "ctrlr_loss_timeout_sec": 0, 00:40:00.678 "reconnect_delay_sec": 0, 00:40:00.678 "fast_io_fail_timeout_sec": 0, 00:40:00.678 "disable_auto_failback": false, 00:40:00.678 "generate_uuids": false, 00:40:00.678 "transport_tos": 0, 00:40:00.678 "nvme_error_stat": false, 00:40:00.678 "rdma_srq_size": 0, 00:40:00.678 "io_path_stat": false, 00:40:00.678 "allow_accel_sequence": false, 00:40:00.678 "rdma_max_cq_size": 0, 00:40:00.678 "rdma_cm_event_timeout_ms": 0, 00:40:00.678 "dhchap_digests": [ 00:40:00.678 "sha256", 00:40:00.678 "sha384", 00:40:00.678 "sha512" 00:40:00.678 ], 00:40:00.678 "dhchap_dhgroups": [ 00:40:00.678 "null", 00:40:00.678 "ffdhe2048", 00:40:00.678 "ffdhe3072", 00:40:00.678 "ffdhe4096", 00:40:00.678 "ffdhe6144", 00:40:00.678 "ffdhe8192" 00:40:00.678 ] 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": "bdev_nvme_attach_controller", 00:40:00.678 "params": { 00:40:00.678 "name": "nvme0", 00:40:00.678 "trtype": "TCP", 00:40:00.678 "adrfam": "IPv4", 00:40:00.678 "traddr": "127.0.0.1", 00:40:00.678 "trsvcid": "4420", 00:40:00.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:00.678 "prchk_reftag": false, 00:40:00.678 "prchk_guard": false, 00:40:00.678 "ctrlr_loss_timeout_sec": 0, 00:40:00.678 "reconnect_delay_sec": 0, 00:40:00.678 "fast_io_fail_timeout_sec": 0, 00:40:00.678 "psk": "key0", 00:40:00.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:00.678 "hdgst": false, 00:40:00.678 "ddgst": false 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": "bdev_nvme_set_hotplug", 00:40:00.678 "params": { 00:40:00.678 "period_us": 100000, 00:40:00.678 "enable": false 00:40:00.678 } 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "method": "bdev_wait_for_examine" 00:40:00.678 } 00:40:00.678 ] 00:40:00.678 }, 00:40:00.678 { 00:40:00.678 "subsystem": "nbd", 00:40:00.678 "config": [] 00:40:00.678 } 00:40:00.678 ] 00:40:00.678 }' 00:40:00.678 08:54:52 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 4087059 ']' 00:40:00.679 08:54:52 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:00.679 08:54:52 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:00.679 08:54:52 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:00.679 08:54:52 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:00.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:40:00.679 08:54:52 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:00.679 08:54:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:00.679 [2024-10-01 08:54:52.372649] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 00:40:00.679 [2024-10-01 08:54:52.372708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087059 ] 00:40:00.679 [2024-10-01 08:54:52.448717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.938 [2024-10-01 08:54:52.502309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:00.938 [2024-10-01 08:54:52.645060] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:01.509 08:54:53 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:01.509 08:54:53 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:40:01.509 08:54:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:01.509 08:54:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:01.509 08:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:01.509 08:54:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:01.509 08:54:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:01.509 08:54:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:01.509 08:54:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:01.509 08:54:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:01.509 08:54:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:01.509 08:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:01.769 08:54:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:01.769 08:54:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:01.769 08:54:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:01.769 08:54:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:01.769 08:54:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:01.769 08:54:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:01.769 08:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:02.030 08:54:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:02.031 08:54:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:02.031 08:54:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:02.031 08:54:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:02.031 08:54:53 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:02.031 08:54:53 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:02.031 08:54:53 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NdjO3JSm5v /tmp/tmp.Al4e0UilgL 00:40:02.031 08:54:53 keyring_file -- keyring/file.sh@20 -- # killprocess 4087059 
00:40:02.031 08:54:53 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 4087059 ']' 00:40:02.031 08:54:53 keyring_file -- common/autotest_common.sh@954 -- # kill -0 4087059 00:40:02.031 08:54:53 keyring_file -- common/autotest_common.sh@955 -- # uname 00:40:02.031 08:54:53 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:02.031 08:54:53 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4087059 00:40:02.291 08:54:53 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:02.291 08:54:53 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:02.291 08:54:53 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4087059' 00:40:02.291 killing process with pid 4087059 00:40:02.291 08:54:53 keyring_file -- common/autotest_common.sh@969 -- # kill 4087059 00:40:02.291 Received shutdown signal, test time was about 1.000000 seconds 00:40:02.291 00:40:02.291 Latency(us) 00:40:02.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:02.291 =================================================================================================================== 00:40:02.291 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:02.291 08:54:53 keyring_file -- common/autotest_common.sh@974 -- # wait 4087059 00:40:02.291 08:54:54 keyring_file -- keyring/file.sh@21 -- # killprocess 4085096 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 4085096 ']' 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@954 -- # kill -0 4085096 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@955 -- # uname 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4085096 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4085096' 00:40:02.291 killing process with pid 4085096 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@969 -- # kill 4085096 00:40:02.291 08:54:54 keyring_file -- common/autotest_common.sh@974 -- # wait 4085096 00:40:02.552 00:40:02.552 real 0m11.729s 00:40:02.552 user 0m28.141s 00:40:02.552 sys 0m2.643s 00:40:02.552 08:54:54 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:02.552 08:54:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:02.552 ************************************ 00:40:02.552 END TEST keyring_file 00:40:02.552 ************************************ 00:40:02.552 08:54:54 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:40:02.552 08:54:54 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:02.552 08:54:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:02.552 08:54:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:02.552 08:54:54 -- common/autotest_common.sh@10 -- # set +x 00:40:02.815 ************************************ 00:40:02.815 START TEST keyring_linux 00:40:02.815 ************************************ 00:40:02.815 08:54:54 keyring_linux -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:02.815 Joined session keyring: 946002303 00:40:02.815 * Looking for test storage... 00:40:02.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:02.815 08:54:54 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:02.815 08:54:54 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:40:02.815 08:54:54 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:02.815 08:54:54 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:02.815 08:54:54 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:02.816 08:54:54 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.816 08:54:54 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:02.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.816 --rc genhtml_branch_coverage=1 00:40:02.816 --rc genhtml_function_coverage=1 00:40:02.816 --rc genhtml_legend=1 00:40:02.816 --rc geninfo_all_blocks=1 00:40:02.816 --rc geninfo_unexecuted_blocks=1 00:40:02.816 00:40:02.816 ' 00:40:02.816 08:54:54 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:02.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.816 --rc genhtml_branch_coverage=1 00:40:02.816 --rc genhtml_function_coverage=1 00:40:02.816 --rc genhtml_legend=1 00:40:02.816 --rc geninfo_all_blocks=1 00:40:02.816 --rc geninfo_unexecuted_blocks=1 00:40:02.816 00:40:02.816 ' 00:40:02.816 08:54:54 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:02.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.816 --rc genhtml_branch_coverage=1 00:40:02.816 --rc genhtml_function_coverage=1 00:40:02.816 --rc genhtml_legend=1 00:40:02.816 --rc geninfo_all_blocks=1 00:40:02.816 --rc geninfo_unexecuted_blocks=1 00:40:02.816 00:40:02.816 ' 00:40:02.816 08:54:54 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:02.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.816 --rc genhtml_branch_coverage=1 00:40:02.816 --rc genhtml_function_coverage=1 00:40:02.816 --rc genhtml_legend=1 00:40:02.816 --rc geninfo_all_blocks=1 00:40:02.816 --rc geninfo_unexecuted_blocks=1 00:40:02.816 00:40:02.816 ' 00:40:02.816 08:54:54 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:02.816 08:54:54 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.816 08:54:54 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.816 08:54:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.816 08:54:54 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.816 08:54:54 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.816 08:54:54 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:02.816 08:54:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:02.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.816 08:54:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:02.816 08:54:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:02.816 08:54:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:02.816 08:54:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:02.816 08:54:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:02.816 08:54:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:02.816 08:54:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:02.816 08:54:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:02.816 08:54:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:02.816 08:54:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:02.816 08:54:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:02.816 08:54:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:02.816 08:54:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:40:02.816 08:54:54 keyring_linux -- nvmf/common.sh@729 -- # python - 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:03.079 /tmp/:spdk-test:key0 00:40:03.079 08:54:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:03.079 
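[Editor's note: the /tmp/:spdk-test:key0 file written above holds the output of format_interchange_psk, which wraps the raw hex string from linux.sh@13 in the NVMe TLS PSK interchange format ("NVMeTLSkey-1:<digest>:<base64 payload>:"); key1 goes through the same conversion just below. A rough sketch of that conversion follows, mirroring the log's own `python -` heredoc and assuming the payload is the ASCII key bytes with a little-endian CRC-32 appended; the inline python in nvmf/common.sh@729 is the authoritative implementation.]

    python3 - <<'EOF'
    import base64
    import zlib

    key = b"00112233445566778899aabbccddeeff"    # key0 from linux.sh@13
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC-32 trailer (byte order assumed)
    # digest field "00" corresponds to the digest=0 argument (no PSK hash)
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    # should match the NVMeTLSkey-1:00:MDAx... string compared at linux.sh@27
    EOF

[The resulting string is what later gets loaded into the kernel session keyring via `keyctl add user :spdk-test:key0 "<psk>" @s`, as seen further down in the log.]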
08:54:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:03.079 08:54:54 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:03.079 08:54:54 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:40:03.079 08:54:54 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:40:03.079 08:54:54 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:40:03.079 08:54:54 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:40:03.079 08:54:54 keyring_linux -- nvmf/common.sh@729 -- # python - 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:03.079 08:54:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:03.079 /tmp/:spdk-test:key1 00:40:03.079 08:54:54 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:03.079 08:54:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4087653 00:40:03.079 08:54:54 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4087653 00:40:03.079 08:54:54 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 4087653 ']' 00:40:03.079 08:54:54 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:03.079 08:54:54 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:03.079 08:54:54 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:03.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:03.079 08:54:54 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:03.079 08:54:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:03.079 [2024-10-01 08:54:54.761270] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:40:03.079 [2024-10-01 08:54:54.761344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087653 ] 00:40:03.079 [2024-10-01 08:54:54.824986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:03.079 [2024-10-01 08:54:54.899504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:40:04.021 08:54:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:04.021 [2024-10-01 08:54:55.564977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:04.021 null0 00:40:04.021 [2024-10-01 08:54:55.597028] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:04.021 [2024-10-01 08:54:55.597429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.021 08:54:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:04.021 1047706139 00:40:04.021 08:54:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:04.021 430012582 00:40:04.021 08:54:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4087705 00:40:04.021 08:54:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4087705 /var/tmp/bperf.sock 00:40:04.021 08:54:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 4087705 ']' 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:04.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:04.021 08:54:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:04.021 [2024-10-01 08:54:55.675817] Starting SPDK v25.01-pre git sha1 718f46c19 / DPDK 24.03.0 initialization... 
00:40:04.021 [2024-10-01 08:54:55.675870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087705 ] 00:40:04.021 [2024-10-01 08:54:55.748834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.021 [2024-10-01 08:54:55.802361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.961 08:54:56 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:04.961 08:54:56 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:40:04.961 08:54:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:04.961 08:54:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:04.961 08:54:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:04.961 08:54:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:05.221 08:54:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:05.221 08:54:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:05.221 [2024-10-01 08:54:56.977957] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:05.482 nvme0n1 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:05.482 08:54:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:05.482 08:54:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:05.482 08:54:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.482 08:54:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.482 08:54:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:05.742 08:54:57 keyring_linux -- keyring/linux.sh@25 -- # sn=1047706139 00:40:05.742 08:54:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:05.742 08:54:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:05.742 08:54:57 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 1047706139 == \1\0\4\7\7\0\6\1\3\9 ]] 00:40:05.742 08:54:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1047706139 00:40:05.742 08:54:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:05.742 08:54:57 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:05.742 Running I/O for 1 seconds... 00:40:07.123 16822.00 IOPS, 65.71 MiB/s 00:40:07.123 Latency(us) 00:40:07.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:07.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:07.123 nvme0n1 : 1.01 16822.33 65.71 0.00 0.00 7577.38 1788.59 8792.75 00:40:07.123 =================================================================================================================== 00:40:07.123 Total : 16822.33 65.71 0.00 0.00 7577.38 1788.59 8792.75 00:40:07.123 { 00:40:07.123 "results": [ 00:40:07.123 { 00:40:07.123 "job": "nvme0n1", 00:40:07.123 "core_mask": "0x2", 00:40:07.123 "workload": "randread", 00:40:07.123 "status": "finished", 00:40:07.123 "queue_depth": 128, 00:40:07.123 "io_size": 4096, 00:40:07.123 "runtime": 1.007649, 00:40:07.123 "iops": 16822.32602821022, 00:40:07.123 "mibps": 65.71221104769617, 00:40:07.123 "io_failed": 0, 00:40:07.123 "io_timeout": 0, 00:40:07.123 "avg_latency_us": 7577.384137022397, 00:40:07.123 "min_latency_us": 1788.5866666666666, 00:40:07.123 "max_latency_us": 8792.746666666666 00:40:07.123 } 00:40:07.123 ], 00:40:07.123 "core_count": 1 00:40:07.123 } 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:07.123 08:54:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:07.123 08:54:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:07.123 08:54:58 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:07.123 08:54:58 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:40:07.123 08:54:58 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:07.123 08:54:58 keyring_linux -- common/autotest_common.sh@638 
-- # local arg=bperf_cmd 00:40:07.123 08:54:58 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:07.123 08:54:58 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:40:07.123 08:54:58 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:07.123 08:54:58 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:07.123 08:54:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:07.383 [2024-10-01 08:54:59.067671] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:07.383 [2024-10-01 08:54:59.068107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2310c10 (107): Transport endpoint is not connected 00:40:07.383 [2024-10-01 08:54:59.069103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2310c10 (9): Bad file descriptor 00:40:07.383 [2024-10-01 08:54:59.070105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:07.383 [2024-10-01 08:54:59.070112] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:07.383 [2024-10-01 08:54:59.070118] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:07.383 [2024-10-01 08:54:59.070124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:40:07.383 request: 00:40:07.383 { 00:40:07.383 "name": "nvme0", 00:40:07.383 "trtype": "tcp", 00:40:07.383 "traddr": "127.0.0.1", 00:40:07.383 "adrfam": "ipv4", 00:40:07.383 "trsvcid": "4420", 00:40:07.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:07.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:07.383 "prchk_reftag": false, 00:40:07.383 "prchk_guard": false, 00:40:07.383 "hdgst": false, 00:40:07.383 "ddgst": false, 00:40:07.383 "psk": ":spdk-test:key1", 00:40:07.383 "allow_unrecognized_csi": false, 00:40:07.383 "method": "bdev_nvme_attach_controller", 00:40:07.383 "req_id": 1 00:40:07.383 } 00:40:07.383 Got JSON-RPC error response 00:40:07.383 response: 00:40:07.383 { 00:40:07.383 "code": -5, 00:40:07.383 "message": "Input/output error" 00:40:07.383 } 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@33 -- # sn=1047706139 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1047706139 00:40:07.383 1 links removed 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@33 -- # sn=430012582 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 430012582 00:40:07.383 1 links removed 00:40:07.383 08:54:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4087705 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 4087705 ']' 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 4087705 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:07.383 08:54:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4087705 00:40:07.384 08:54:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:07.384 08:54:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:07.384 08:54:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4087705' 00:40:07.384 killing process with pid 4087705 00:40:07.384 08:54:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 4087705 00:40:07.384 Received shutdown signal, test time was about 1.000000 seconds 00:40:07.384 00:40:07.384 
Latency(us) 00:40:07.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:07.384 =================================================================================================================== 00:40:07.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:07.384 08:54:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 4087705 00:40:07.644 08:54:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4087653 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 4087653 ']' 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 4087653 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4087653 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4087653' 00:40:07.644 killing process with pid 4087653 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 4087653 00:40:07.644 08:54:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 4087653 00:40:07.903 00:40:07.903 real 0m5.207s 00:40:07.904 user 0m9.601s 00:40:07.904 sys 0m1.423s 00:40:07.904 08:54:59 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:07.904 08:54:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:07.904 ************************************ 00:40:07.904 END TEST keyring_linux 00:40:07.904 ************************************ 00:40:07.904 08:54:59 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:07.904 08:54:59 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:40:07.904 08:54:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:07.904 08:54:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:07.904 08:54:59 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:40:07.904 08:54:59 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:40:07.904 08:54:59 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:40:07.904 08:54:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:07.904 08:54:59 -- common/autotest_common.sh@10 -- # set +x 00:40:07.904 08:54:59 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:40:07.904 08:54:59 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:07.904 08:54:59 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:07.904 08:54:59 -- common/autotest_common.sh@10 -- # set +x 00:40:16.036 INFO: APP EXITING 00:40:16.036 INFO: killing all VMs 00:40:16.036 INFO: killing vhost app 00:40:16.036 WARN: 
no vhost pid file found 00:40:16.036 INFO: EXIT DONE 00:40:19.332 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:19.332 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:19.332 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:40:22.634 Cleaning 00:40:22.634 Removing: /var/run/dpdk/spdk0/config 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:22.634 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:22.634 Removing: /var/run/dpdk/spdk1/config 00:40:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:22.894 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:22.894 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:22.894 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:22.894 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:22.894 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:22.894 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:22.894 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:22.894 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:22.894 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:22.895 Removing: /var/run/dpdk/spdk2/config 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:22.895 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:22.895 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:22.895 Removing: /var/run/dpdk/spdk3/config 00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 
00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:22.895 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:22.895 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:22.895 Removing: /var/run/dpdk/spdk4/config 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:22.895 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:22.895 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:22.895 Removing: /dev/shm/bdev_svc_trace.1 00:40:22.895 Removing: /dev/shm/nvmf_trace.0 00:40:22.895 Removing: /dev/shm/spdk_tgt_trace.pid3517208 00:40:22.895 Removing: /var/run/dpdk/spdk0 00:40:22.895 Removing: /var/run/dpdk/spdk1 00:40:22.895 Removing: /var/run/dpdk/spdk2 00:40:22.895 Removing: /var/run/dpdk/spdk3 00:40:22.895 Removing: /var/run/dpdk/spdk4 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3515257 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3517208 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3517942 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3519095 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3519345 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3520502 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3520607 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3520972 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3522112 00:40:22.895 Removing: /var/run/dpdk/spdk_pid3522902 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3523276 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3523646 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3524062 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3524439 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3524604 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3524898 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3525286 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3526477 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3529946 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3530322 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3530686 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3531016 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3531390 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3531523 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3532101 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3532123 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3532486 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3532813 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3532878 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3533188 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3533639 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3533995 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3534392 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3538920 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3544299 00:40:23.155 Removing: 
/var/run/dpdk/spdk_pid3556090 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3556773 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3562073 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3562605 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3568148 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3575225 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3578328 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3590923 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3601906 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3603992 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3605231 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3626485 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3631248 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3687292 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3693790 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3700750 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3708190 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3708192 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3709196 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3710201 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3711207 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3711875 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3711888 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3712216 00:40:23.155 Removing: /var/run/dpdk/spdk_pid3712235 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3712255 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3713301 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3714305 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3715390 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3715996 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3716125 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3716381 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3717765 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3719123 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3729681 00:40:23.156 Removing: /var/run/dpdk/spdk_pid3765499 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3770914 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3772900 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3775048 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3775263 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3775577 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3775617 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3776327 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3778595 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3779760 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3780464 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3783075 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3783876 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3784593 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3789649 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3796351 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3796352 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3796353 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3801004 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3811241 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3816685 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3823858 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3825389 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3827011 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3828761 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3834397 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3839226 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3848310 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3848315 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3853371 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3853706 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3853906 00:40:23.416 Removing: 
/var/run/dpdk/spdk_pid3854380 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3854391 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3859956 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3860588 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3865956 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3869678 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3876394 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3882651 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3892856 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3901349 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3901378 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3925294 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3925995 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3926842 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3927659 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3928631 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3929406 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3930091 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3930772 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3935854 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3936168 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3943519 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3943644 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3950127 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3955348 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3966697 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3967395 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3972986 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3973339 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3978430 00:40:23.416 Removing: /var/run/dpdk/spdk_pid3985364 00:40:23.678 Removing: /var/run/dpdk/spdk_pid3988333 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4000402 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4011011 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4013013 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4014022 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4033934 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4038578 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4041786 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4049196 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4049208 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4055081 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4057444 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4059782 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4061047 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4063499 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4064855 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4074855 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4075430 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4076440 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4079503 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4080010 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4080524 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4085096 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4085405 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4087059 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4087653 00:40:23.678 Removing: /var/run/dpdk/spdk_pid4087705 00:40:23.678 Clean 00:40:23.678 08:55:15 -- common/autotest_common.sh@1451 -- # return 0 00:40:23.678 08:55:15 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:40:23.678 08:55:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:23.678 08:55:15 -- common/autotest_common.sh@10 -- # set +x 00:40:23.678 08:55:15 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:40:23.678 08:55:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:23.678 08:55:15 -- 
common/autotest_common.sh@10 -- # set +x 00:40:23.939 08:55:15 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:23.939 08:55:15 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:23.939 08:55:15 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:23.939 08:55:15 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:40:23.939 08:55:15 -- spdk/autotest.sh@394 -- # hostname 00:40:23.939 08:55:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:23.939 geninfo: WARNING: invalid characters removed from testname! 00:40:50.536 08:55:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:50.537 08:55:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:51.917 08:55:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:53.298 08:55:45 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:55.208 08:55:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:56.590 08:55:48 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:58.500 08:55:49 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:58.500 08:55:49 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:40:58.500 08:55:49 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:40:58.500 08:55:49 -- common/autotest_common.sh@1681 -- $ lcov --version 00:40:58.500 08:55:50 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:40:58.500 08:55:50 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:40:58.500 08:55:50 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:40:58.500 08:55:50 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:40:58.500 08:55:50 -- scripts/common.sh@336 -- $ IFS=.-: 00:40:58.500 08:55:50 -- scripts/common.sh@336 -- $ read -ra ver1 00:40:58.500 08:55:50 -- scripts/common.sh@337 -- $ IFS=.-: 00:40:58.500 08:55:50 -- scripts/common.sh@337 -- $ read -ra ver2 00:40:58.500 08:55:50 -- scripts/common.sh@338 -- $ local 'op=<' 00:40:58.500 08:55:50 -- scripts/common.sh@340 -- $ ver1_l=2 00:40:58.500 08:55:50 -- scripts/common.sh@341 -- $ ver2_l=1 00:40:58.500 08:55:50 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:40:58.500 08:55:50 -- scripts/common.sh@344 -- $ case "$op" in 00:40:58.500 08:55:50 -- scripts/common.sh@345 -- $ : 1 00:40:58.500 08:55:50 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:40:58.500 08:55:50 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:58.500 08:55:50 -- scripts/common.sh@365 -- $ decimal 1 00:40:58.500 08:55:50 -- scripts/common.sh@353 -- $ local d=1 00:40:58.500 08:55:50 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:40:58.500 08:55:50 -- scripts/common.sh@355 -- $ echo 1 00:40:58.500 08:55:50 -- scripts/common.sh@365 -- $ ver1[v]=1 00:40:58.500 08:55:50 -- scripts/common.sh@366 -- $ decimal 2 00:40:58.500 08:55:50 -- scripts/common.sh@353 -- $ local d=2 00:40:58.500 08:55:50 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:40:58.500 08:55:50 -- scripts/common.sh@355 -- $ echo 2 00:40:58.500 08:55:50 -- scripts/common.sh@366 -- $ ver2[v]=2 00:40:58.500 08:55:50 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:40:58.500 08:55:50 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:40:58.500 08:55:50 -- scripts/common.sh@368 -- $ return 0 00:40:58.500 08:55:50 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:58.500 08:55:50 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:40:58.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.500 --rc genhtml_branch_coverage=1 00:40:58.500 --rc genhtml_function_coverage=1 00:40:58.500 --rc genhtml_legend=1 00:40:58.500 --rc geninfo_all_blocks=1 00:40:58.500 --rc geninfo_unexecuted_blocks=1 00:40:58.500 00:40:58.500 ' 00:40:58.500 08:55:50 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:40:58.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.500 --rc genhtml_branch_coverage=1 00:40:58.500 --rc genhtml_function_coverage=1 00:40:58.500 --rc genhtml_legend=1 00:40:58.500 --rc geninfo_all_blocks=1 00:40:58.500 --rc geninfo_unexecuted_blocks=1 00:40:58.500 00:40:58.500 ' 00:40:58.500 08:55:50 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:40:58.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.500 --rc 
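The scripts/common.sh trace above is cmp_versions deciding whether the installed lcov predates 2.0 (the `lt 1.15 2` call), which selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options exported just afterwards. A standalone condensation of that element-wise compare; the real helper is more general (it also handles `>` and `=` operators), so treat this as a sketch:

    # Sketch: return 0 (true) when version $1 sorts before version $2.
    version_lt() {
        local IFS=.-:                  # split on the same separators as the trace above
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                       # equal versions are not less-than
    }
    # The log feeds it "$(lcov --version | awk '{print $NF}')"; with literals:
    version_lt 1.15 2 && echo "pre-2.0 lcov: enable branch-coverage rc options"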
genhtml_branch_coverage=1 00:40:58.500 --rc genhtml_function_coverage=1 00:40:58.500 --rc genhtml_legend=1 00:40:58.500 --rc geninfo_all_blocks=1 00:40:58.500 --rc geninfo_unexecuted_blocks=1 00:40:58.500 00:40:58.500 ' 00:40:58.500 08:55:50 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:40:58.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.500 --rc genhtml_branch_coverage=1 00:40:58.500 --rc genhtml_function_coverage=1 00:40:58.500 --rc genhtml_legend=1 00:40:58.500 --rc geninfo_all_blocks=1 00:40:58.500 --rc geninfo_unexecuted_blocks=1 00:40:58.500 00:40:58.500 ' 00:40:58.500 08:55:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:58.500 08:55:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:40:58.500 08:55:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:40:58.500 08:55:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:58.500 08:55:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:58.500 08:55:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.500 08:55:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.500 08:55:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.500 08:55:50 -- paths/export.sh@5 -- $ export PATH 00:40:58.500 08:55:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.500 08:55:50 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:40:58.500 08:55:50 -- common/autobuild_common.sh@479 -- $ date +%s 00:40:58.500 08:55:50 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727765750.XXXXXX 00:40:58.500 08:55:50 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727765750.hezg3C 00:40:58.500 08:55:50 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:40:58.500 08:55:50 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:40:58.500 08:55:50 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:40:58.500 08:55:50 -- 
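autobuild_common.sh then stamps the run with epoch seconds and creates a throwaway scratch workspace with mktemp, which is where the /tmp/spdk_1727765750.hezg3C path above comes from. A minimal reproduction of that step; the cleanup trap is an assumption, since this excerpt does not show who removes the directory:

    ts=$(date +%s)                                    # e.g. 1727765750 in the log
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")  # e.g. /tmp/spdk_1727765750.hezg3C
    trap 'rm -rf "$SPDK_WORKSPACE"' EXIT              # assumed cleanup, not shown in the log
    echo "scratch workspace: $SPDK_WORKSPACE"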
common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:40:58.500 08:55:50 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:40:58.500 08:55:50 -- common/autobuild_common.sh@495 -- $ get_config_params 00:40:58.500 08:55:50 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:40:58.500 08:55:50 -- common/autotest_common.sh@10 -- $ set +x 00:40:58.500 08:55:50 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:40:58.500 08:55:50 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:40:58.500 08:55:50 -- pm/common@17 -- $ local monitor 00:40:58.500 08:55:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:58.500 08:55:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:58.500 08:55:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:58.500 08:55:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:58.500 08:55:50 -- pm/common@21 -- $ date +%s 00:40:58.500 08:55:50 -- pm/common@21 -- $ date +%s 00:40:58.500 08:55:50 -- pm/common@25 -- $ sleep 1 00:40:58.500 08:55:50 -- pm/common@21 -- $ date +%s 00:40:58.500 08:55:50 -- pm/common@21 -- $ date +%s 00:40:58.500 08:55:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727765750 00:40:58.500 08:55:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727765750 00:40:58.500 08:55:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727765750 00:40:58.500 08:55:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727765750 00:40:58.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727765750_collect-cpu-load.pm.log 00:40:58.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727765750_collect-vmstat.pm.log 00:40:58.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727765750_collect-cpu-temp.pm.log 00:40:58.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727765750_collect-bmc-pm.bmc.pm.log 00:40:59.443 08:55:51 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:40:59.443 08:55:51 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:40:59.443 08:55:51 -- spdk/autopackage.sh@14 -- $ timing_finish 00:40:59.443 08:55:51 -- common/autotest_common.sh@736 -- $ 
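start_monitor_resources then launches one power/performance collector per MONITOR_RESOURCES entry, all tagged with the same epoch suffix so their pm.log files line up; the "Redirecting to ..." lines show each collector redirecting its own output. A hedged sketch of that fan-out: the collector paths and flags are taken from the trace, but backgrounding with & and the pid files the collectors write are assumptions inferred from the stop path later in the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    PM=$SPDK/scripts/perf/pm
    POWER=$SPDK/../output/power
    tag="monitor.autopackage.sh.$(date +%s)"

    # Unprivileged collectors; each is assumed to write $POWER/<name>.pid
    # and redirect itself to its own pm.log, as the log lines show.
    for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$PM/$collector" -d "$POWER" -l -p "$tag" &
    done
    # BMC power readings need root, matching the sudo -E in the trace.
    sudo -E "$PM/collect-bmc-pm" -d "$POWER" -l -p "$tag" &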
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:59.443 08:55:51 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:59.443 08:55:51 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:59.443 08:55:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:40:59.443 08:55:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:40:59.443 08:55:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:40:59.443 08:55:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:59.443 08:55:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:40:59.443 08:55:51 -- pm/common@44 -- $ pid=4100635 00:40:59.443 08:55:51 -- pm/common@50 -- $ kill -TERM 4100635 00:40:59.443 08:55:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:59.443 08:55:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:40:59.443 08:55:51 -- pm/common@44 -- $ pid=4100636 00:40:59.443 08:55:51 -- pm/common@50 -- $ kill -TERM 4100636 00:40:59.443 08:55:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:59.443 08:55:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:40:59.443 08:55:51 -- pm/common@44 -- $ pid=4100638 00:40:59.443 08:55:51 -- pm/common@50 -- $ kill -TERM 4100638 00:40:59.443 08:55:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:59.443 08:55:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:40:59.443 08:55:51 -- pm/common@44 -- $ pid=4100662 00:40:59.443 08:55:51 -- pm/common@50 -- $ sudo -E kill -TERM 4100662 00:40:59.443 + [[ -n 3431073 ]] 00:40:59.443 + sudo kill 3431073 00:40:59.454 [Pipeline] } 00:40:59.470 [Pipeline] // stage 00:40:59.475 [Pipeline] } 00:40:59.489 [Pipeline] // timeout 00:40:59.496 [Pipeline] } 00:40:59.511 [Pipeline] // catchError 00:40:59.518 [Pipeline] } 00:40:59.534 [Pipeline] // wrap 00:40:59.541 [Pipeline] } 00:40:59.556 [Pipeline] // catchError 00:40:59.567 [Pipeline] stage 00:40:59.569 [Pipeline] { (Epilogue) 00:40:59.582 [Pipeline] catchError 00:40:59.584 [Pipeline] { 00:40:59.596 [Pipeline] echo 00:40:59.598 Cleanup processes 00:40:59.604 [Pipeline] sh 00:40:59.893 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:59.894 4100775 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:40:59.894 4101331 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:59.913 [Pipeline] sh 00:41:00.207 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:00.207 ++ grep -v 'sudo pgrep' 00:41:00.207 ++ awk '{print $1}' 00:41:00.207 + sudo kill -9 4100775 00:41:00.219 [Pipeline] sh 00:41:00.508 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:12.748 [Pipeline] sh 00:41:13.036 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:13.036 Artifacts sizes are good 00:41:13.051 [Pipeline] archiveArtifacts 00:41:13.059 Archiving artifacts 00:41:13.244 [Pipeline] sh 00:41:13.531 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:13.547 [Pipeline] cleanWs 00:41:13.557 [WS-CLEANUP] Deleting 
project workspace... 00:41:13.557 [WS-CLEANUP] Deferred wipeout is used... 00:41:13.565 [WS-CLEANUP] done 00:41:13.567 [Pipeline] } 00:41:13.585 [Pipeline] // catchError 00:41:13.596 [Pipeline] sh 00:41:14.013 + logger -p user.info -t JENKINS-CI 00:41:14.027 [Pipeline] } 00:41:14.041 [Pipeline] // stage 00:41:14.046 [Pipeline] } 00:41:14.060 [Pipeline] // node 00:41:14.066 [Pipeline] End of Pipeline 00:41:14.105 Finished: SUCCESS
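For reference, the teardown traced just before the epilogue (stop_monitor_resources) is the mirror image of that launch: after rendering timing.txt with flamegraph.pl, it reads each collector's pid file under the power/ output directory and sends TERM, using sudo only for the BMC collector. A minimal sketch, assuming each .pid file holds a single process id:

    POWER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power

    for collector in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$POWER/$collector.pid"
        [[ -e $pidfile ]] || continue          # collector may not have started
        pid=$(<"$pidfile")
        if [[ $collector == collect-bmc-pm ]]; then
            sudo -E kill -TERM "$pid"          # started as root, must be signalled as root
        else
            kill -TERM "$pid"
        fi
    done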